00:00:00.000 Started by upstream project "autotest-per-patch" build number 121340
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.084 The recommended git tool is: git
00:00:00.084 using credential 00000000-0000-0000-0000-000000000002
00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.137 Fetching changes from the remote Git repository
00:00:00.138 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.175 Using shallow fetch with depth 1
00:00:00.175 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.175 > git --version # timeout=10
00:00:00.207 > git --version # 'git version 2.39.2'
00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.208 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.208 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.760 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.771 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.782 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD)
00:00:04.782 > git config core.sparsecheckout # timeout=10
00:00:04.791 > git read-tree -mu HEAD # timeout=10
00:00:04.808 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5
00:00:04.830 Commit message: "ansible/roles/custom_facts: Drop nvme features"
00:00:04.830 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10
00:00:04.940 [Pipeline] Start of Pipeline
00:00:04.955 [Pipeline] library
00:00:04.956 Loading library shm_lib@master
00:00:04.957 Library shm_lib@master is cached. Copying from home.
00:00:04.969 [Pipeline] node
00:00:04.977 Running on VM-host-SM0 in /var/jenkins/workspace/ubuntu22-vg-autotest_2
00:00:04.979 [Pipeline] {
00:00:04.989 [Pipeline] catchError
00:00:04.991 [Pipeline] {
00:00:05.002 [Pipeline] wrap
00:00:05.008 [Pipeline] {
00:00:05.013 [Pipeline] stage
00:00:05.014 [Pipeline] { (Prologue)
00:00:05.027 [Pipeline] echo
00:00:05.028 Node: VM-host-SM0
00:00:05.032 [Pipeline] cleanWs
00:00:05.039 [WS-CLEANUP] Deleting project workspace...
00:00:05.039 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.044 [WS-CLEANUP] done
00:00:05.223 [Pipeline] setCustomBuildProperty
00:00:05.275 [Pipeline] nodesByLabel
00:00:05.276 Found a total of 1 nodes with the 'sorcerer' label
00:00:05.285 [Pipeline] httpRequest
00:00:05.289 HttpMethod: GET
00:00:05.290 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:05.290 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:05.308 Response Code: HTTP/1.1 200 OK
00:00:05.308 Success: Status code 200 is in the accepted range: 200,404
00:00:05.309 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:27.109 [Pipeline] sh
00:00:27.389 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:27.409 [Pipeline] httpRequest
00:00:27.414 HttpMethod: GET
00:00:27.414 URL: http://10.211.164.96/packages/spdk_6651b13f785a407c9e2f93ac35ba429108401909.tar.gz
00:00:27.415 Sending request to url: http://10.211.164.96/packages/spdk_6651b13f785a407c9e2f93ac35ba429108401909.tar.gz
00:00:27.421 Response Code: HTTP/1.1 200 OK
00:00:27.421 Success: Status code 200 is in the accepted range: 200,404
00:00:27.422 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk_6651b13f785a407c9e2f93ac35ba429108401909.tar.gz
00:01:58.245 [Pipeline] sh
00:01:58.519 + tar --no-same-owner -xf spdk_6651b13f785a407c9e2f93ac35ba429108401909.tar.gz
00:02:01.814 [Pipeline] sh
00:02:02.106 + git -C spdk log --oneline -n5
00:02:02.106 6651b13f7 test/scheduler: Enable load_balancing test back
00:02:02.106 c3fd276bb test/scheduler: Stop using cgroups
00:02:02.106 8571999d8 test/scheduler: Stop moving all processes between cgroups
00:02:02.106 06472fb6d lib/idxd: fix batch size in kernel IDXD
00:02:02.106 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD
00:02:02.126 [Pipeline] writeFile
00:02:02.142 [Pipeline] sh
00:02:02.423 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:02.435 [Pipeline] sh
00:02:02.714 + cat autorun-spdk.conf
00:02:02.714 SPDK_TEST_UNITTEST=1
00:02:02.714 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.714 SPDK_TEST_NVME=1
00:02:02.714 SPDK_TEST_BLOCKDEV=1
00:02:02.714 SPDK_RUN_ASAN=1
00:02:02.714 SPDK_RUN_UBSAN=1
00:02:02.714 SPDK_TEST_RAID5=1
00:02:02.714 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.721 RUN_NIGHTLY=0
00:02:02.725 [Pipeline] }
00:02:02.743 [Pipeline] // stage
00:02:02.760 [Pipeline] stage
00:02:02.763 [Pipeline] { (Run VM)
00:02:02.777 [Pipeline] sh
00:02:03.056 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:03.056 + echo 'Start stage prepare_nvme.sh'
00:02:03.056 Start stage prepare_nvme.sh
00:02:03.056 + [[ -n 4 ]]
00:02:03.056 + disk_prefix=ex4
00:02:03.056 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest_2 ]]
00:02:03.056 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf ]]
00:02:03.056 + source /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf
00:02:03.056 ++ SPDK_TEST_UNITTEST=1
00:02:03.056 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:03.056 ++ SPDK_TEST_NVME=1
00:02:03.056 ++ SPDK_TEST_BLOCKDEV=1
00:02:03.056 ++ SPDK_RUN_ASAN=1
00:02:03.056 ++ SPDK_RUN_UBSAN=1
00:02:03.056 ++ SPDK_TEST_RAID5=1
00:02:03.056 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:03.056 ++ RUN_NIGHTLY=0
00:02:03.056 + cd /var/jenkins/workspace/ubuntu22-vg-autotest_2
00:02:03.056 + nvme_files=()
00:02:03.056 + declare -A nvme_files
00:02:03.056 + backend_dir=/var/lib/libvirt/images/backends
00:02:03.056 + nvme_files['nvme.img']=5G
00:02:03.056 + nvme_files['nvme-cmb.img']=5G
00:02:03.056 + nvme_files['nvme-multi0.img']=4G
00:02:03.056 + nvme_files['nvme-multi1.img']=4G
00:02:03.056 + nvme_files['nvme-multi2.img']=4G
00:02:03.056 + nvme_files['nvme-openstack.img']=8G
00:02:03.056 + nvme_files['nvme-zns.img']=5G
00:02:03.056 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:03.056 + (( SPDK_TEST_FTL == 1 ))
00:02:03.056 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:03.056 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:03.056 + for nvme in "${!nvme_files[@]}"
00:02:03.056 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:02:03.057 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:03.057 + for nvme in "${!nvme_files[@]}"
00:02:03.057 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:02:03.057 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:03.057 + for nvme in "${!nvme_files[@]}"
00:02:03.057 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:02:03.057 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:03.057 + for nvme in "${!nvme_files[@]}"
00:02:03.057 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:02:03.057 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:03.057 + for nvme in "${!nvme_files[@]}"
00:02:03.057 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:02:03.057 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:03.057 + for nvme in "${!nvme_files[@]}"
00:02:03.057 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:02:03.057 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:03.057 + for nvme in "${!nvme_files[@]}"
00:02:03.057 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:02:03.315 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:03.315 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:02:03.315 + echo 'End stage prepare_nvme.sh'
00:02:03.315 End stage prepare_nvme.sh
00:02:03.327 [Pipeline] sh
00:02:03.606 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:03.606 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f ubuntu2204
00:02:03.606
00:02:03.606 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant
00:02:03.606 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk
00:02:03.606 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest_2
00:02:03.606 HELP=0
00:02:03.606 DRY_RUN=0
00:02:03.606 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,
00:02:03.606 NVME_DISKS_TYPE=nvme,
00:02:03.606 NVME_AUTO_CREATE=0
00:02:03.607 NVME_DISKS_NAMESPACES=,
00:02:03.607 NVME_CMB=,
00:02:03.607 NVME_PMR=,
00:02:03.607 NVME_ZNS=,
00:02:03.607 NVME_MS=,
00:02:03.607 NVME_FDP=,
00:02:03.607 SPDK_VAGRANT_DISTRO=ubuntu2204
00:02:03.607 SPDK_VAGRANT_VMCPU=10
00:02:03.607 SPDK_VAGRANT_VMRAM=12288
00:02:03.607 SPDK_VAGRANT_PROVIDER=libvirt
00:02:03.607 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:03.607 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:03.607 SPDK_OPENSTACK_NETWORK=0
00:02:03.607 VAGRANT_PACKAGE_BOX=0
00:02:03.607 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:02:03.607 FORCE_DISTRO=true
00:02:03.607 VAGRANT_BOX_VERSION=
00:02:03.607 EXTRA_VAGRANTFILES=
00:02:03.607 NIC_MODEL=e1000
00:02:03.607
00:02:03.607 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt'
00:02:03.607 /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest_2
00:02:06.134 Bringing machine 'default' up with 'libvirt' provider...
00:02:07.068 ==> default: Creating image (snapshot of base box volume).
00:02:07.068 ==> default: Creating domain with the following settings...
00:02:07.068 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1714177300_9c7474050abce6850fab
00:02:07.068 ==> default: -- Domain type: kvm
00:02:07.068 ==> default: -- Cpus: 10
00:02:07.068 ==> default: -- Feature: acpi
00:02:07.068 ==> default: -- Feature: apic
00:02:07.068 ==> default: -- Feature: pae
00:02:07.068 ==> default: -- Memory: 12288M
00:02:07.068 ==> default: -- Memory Backing: hugepages:
00:02:07.068 ==> default: -- Management MAC:
00:02:07.068 ==> default: -- Loader:
00:02:07.068 ==> default: -- Nvram:
00:02:07.068 ==> default: -- Base box: spdk/ubuntu2204
00:02:07.068 ==> default: -- Storage pool: default
00:02:07.068 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1714177300_9c7474050abce6850fab.img (20G)
00:02:07.068 ==> default: -- Volume Cache: default
00:02:07.068 ==> default: -- Kernel:
00:02:07.068 ==> default: -- Initrd:
00:02:07.068 ==> default: -- Graphics Type: vnc
00:02:07.068 ==> default: -- Graphics Port: -1
00:02:07.068 ==> default: -- Graphics IP: 127.0.0.1
00:02:07.068 ==> default: -- Graphics Password: Not defined
00:02:07.068 ==> default: -- Video Type: cirrus
00:02:07.068 ==> default: -- Video VRAM: 9216
00:02:07.068 ==> default: -- Sound Type:
00:02:07.068 ==> default: -- Keymap: en-us
00:02:07.068 ==> default: -- TPM Path:
00:02:07.068 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:07.068 ==> default: -- Command line args:
00:02:07.068 ==> default: -> value=-device,
00:02:07.068 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:07.068 ==> default: -> value=-drive,
00:02:07.068 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:02:07.068 ==> default: -> value=-device,
00:02:07.068 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:07.327 ==> default: Creating shared folders metadata...
00:02:07.327 ==> default: Starting domain.
00:02:09.230 ==> default: Waiting for domain to get an IP address...
00:02:19.232 ==> default: Waiting for SSH to become available...
00:02:20.667 ==> default: Configuring and enabling network interfaces...
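The prepare_nvme.sh trace earlier in this stage reduces to a small bash pattern: an associative array maps image file names to sizes, and a single loop drives the creation script. A minimal sketch, assuming create_nvme_img.sh takes -n <path> and -s <size> exactly as the trace shows (the abbreviated array below is illustrative):

    #!/bin/bash
    # Sketch of the prepare_nvme.sh image-creation loop traced above.
    disk_prefix=ex4
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files                  # image file name -> backing size
    nvme_files['nvme.img']=5G
    nvme_files['nvme-multi0.img']=4G
    nvme_files['nvme-openstack.img']=8G
    for nvme in "${!nvme_files[@]}"; do    # "${!...[@]}" iterates the keys
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" \
            -s "${nvme_files[$nvme]}"
    done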
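The "Command line args" lines in the domain settings dump above are the QEMU arguments libvirt passes through to expose the raw backing file to the guest as an NVMe namespace. Gathered into one hypothetical invocation for readability — the -drive/-device arguments are copied from the log, while the qemu-system-x86_64 binary choice, -enable-kvm, -m, and -smp are assumptions:

    # Sketch only; the -drive/-device triplet is taken verbatim from the log.
    qemu-system-x86_64 -enable-kvm -m 12288 -smp 10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

The nvme device is the controller (addr=0x10 is its PCI slot); nvme-ns attaches the raw image to that controller as namespace 1 with 4 KiB logical and physical blocks.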
00:02:25.933 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:31.191 ==> default: Mounting SSHFS shared folder...
00:02:31.756 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:02:31.756 ==> default: Checking Mount..
00:02:32.689 ==> default: Folder Successfully Mounted!
00:02:32.689 ==> default: Running provisioner: file...
00:02:32.947 default: ~/.gitconfig => .gitconfig
00:02:33.513
00:02:33.513 SUCCESS!
00:02:33.513
00:02:33.513 cd to /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:02:33.513 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:33.513 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt" to destroy all trace of vm.
00:02:33.513
00:02:33.521 [Pipeline] }
00:02:33.539 [Pipeline] // stage
00:02:33.549 [Pipeline] dir
00:02:33.550 Running in /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt
00:02:33.551 [Pipeline] {
00:02:33.565 [Pipeline] catchError
00:02:33.567 [Pipeline] {
00:02:33.579 [Pipeline] sh
00:02:33.858 + vagrant ssh-config --host vagrant
00:02:33.858 + sed -ne /^Host/,$p
00:02:33.858 + tee ssh_conf
00:02:38.043 Host vagrant
00:02:38.043 HostName 192.168.121.204
00:02:38.043 User vagrant
00:02:38.043 Port 22
00:02:38.043 UserKnownHostsFile /dev/null
00:02:38.043 StrictHostKeyChecking no
00:02:38.043 PasswordAuthentication no
00:02:38.043 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:02:38.043 IdentitiesOnly yes
00:02:38.043 LogLevel FATAL
00:02:38.043 ForwardAgent yes
00:02:38.043 ForwardX11 yes
00:02:38.043
00:02:38.057 [Pipeline] withEnv
00:02:38.060 [Pipeline] {
00:02:38.076 [Pipeline] sh
00:02:38.356 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:38.356 source /etc/os-release
00:02:38.356 [[ -e /image.version ]] && img=$(< /image.version)
00:02:38.356 # Minimal, systemd-like check.
00:02:38.356 if [[ -e /.dockerenv ]]; then
00:02:38.356 # Clear garbage from the node's name:
00:02:38.356 # agt-er_autotest_547-896 -> autotest_547-896
00:02:38.356 # $HOSTNAME is the actual container id
00:02:38.356 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:38.356 if mountpoint -q /etc/hostname; then
00:02:38.356 # We can assume this is a mount from a host where container is running,
00:02:38.356 # so fetch its hostname to easily identify the target swarm worker.
00:02:38.356 container="$(< /etc/hostname) ($agent)"
00:02:38.356 else
00:02:38.356 # Fallback
00:02:38.356 container=$agent
00:02:38.356 fi
00:02:38.356 fi
00:02:38.356 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:38.356
00:02:38.626 [Pipeline] }
00:02:38.644 [Pipeline] // withEnv
00:02:38.653 [Pipeline] setCustomBuildProperty
00:02:38.667 [Pipeline] stage
00:02:38.670 [Pipeline] { (Tests)
00:02:38.688 [Pipeline] sh
00:02:38.967 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:39.238 [Pipeline] timeout
00:02:39.238 Timeout set to expire in 1 hr 0 min
00:02:39.240 [Pipeline] {
00:02:39.256 [Pipeline] sh
00:02:39.535 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:40.100 HEAD is now at 6651b13f7 test/scheduler: Enable load_balancing test back
00:02:40.114 [Pipeline] sh
00:02:40.402 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:40.672 [Pipeline] sh
00:02:40.948 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:41.219 [Pipeline] sh
00:02:41.494 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo
00:02:41.751 ++ readlink -f spdk_repo
00:02:41.751 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:41.751 + [[ -n /home/vagrant/spdk_repo ]]
00:02:41.751 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:41.751 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:41.751 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:41.751 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:41.751 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:41.751 + cd /home/vagrant/spdk_repo
00:02:41.751 + source /etc/os-release
00:02:41.751 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS'
00:02:41.751 ++ NAME=Ubuntu
00:02:41.751 ++ VERSION_ID=22.04
00:02:41.751 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)'
00:02:41.751 ++ VERSION_CODENAME=jammy
00:02:41.751 ++ ID=ubuntu
00:02:41.751 ++ ID_LIKE=debian
00:02:41.751 ++ HOME_URL=https://www.ubuntu.com/
00:02:41.751 ++ SUPPORT_URL=https://help.ubuntu.com/
00:02:41.751 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:02:41.751 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:02:41.751 ++ UBUNTU_CODENAME=jammy
00:02:41.751 + uname -a
00:02:41.751 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:02:41.751 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:42.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:02:42.009 Hugepages
00:02:42.009 node hugesize free / total
00:02:42.009 node0 1048576kB 0 / 0
00:02:42.009 node0 2048kB 0 / 0
00:02:42.009
00:02:42.009 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:42.010 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:42.010 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:42.010 + rm -f /tmp/spdk-ld-path
00:02:42.010 + source autorun-spdk.conf
00:02:42.010 ++ SPDK_TEST_UNITTEST=1
00:02:42.010 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:42.010 ++ SPDK_TEST_NVME=1
00:02:42.010 ++ SPDK_TEST_BLOCKDEV=1
00:02:42.010 ++ SPDK_RUN_ASAN=1
00:02:42.010 ++ SPDK_RUN_UBSAN=1
00:02:42.010 ++ SPDK_TEST_RAID5=1
00:02:42.010 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:42.010 ++ RUN_NIGHTLY=0
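The inline script piped through /usr/local/bin/ssh in the withEnv step above is easier to read reassembled. The sketch below is the same logic with layout restored and nothing else changed: it reports distro, kernel, image version, and, when running inside a container, a cleaned-up agent name.

    #!/bin/bash
    # Reassembled from the log above; behaviour unchanged.
    source /etc/os-release
    [[ -e /image.version ]] && img=$(< /image.version)
    if [[ -e /.dockerenv ]]; then             # minimal, systemd-like check
        # $HOSTNAME is the container id; strip the prefix from the agent name
        agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
        if mountpoint -q /etc/hostname; then
            # /etc/hostname is mounted from the host running the container,
            # so it identifies the target swarm worker
            container="$(< /etc/hostname) ($agent)"
        else
            container=$agent                  # fallback
        fi
    fi
    echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"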
00:02:42.010 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:42.010 + [[ -n '' ]]
00:02:42.010 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:42.010 + for M in /var/spdk/build-*-manifest.txt
00:02:42.010 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:42.010 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:42.010 + for M in /var/spdk/build-*-manifest.txt
00:02:42.010 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:42.010 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:42.010 ++ uname
00:02:42.010 + [[ Linux == \L\i\n\u\x ]]
00:02:42.010 + sudo dmesg -T
00:02:42.010 + sudo dmesg --clear
00:02:42.269 + dmesg_pid=2102
00:02:42.269 + sudo dmesg -Tw
00:02:42.269 + [[ Ubuntu == FreeBSD ]]
00:02:42.269 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:42.269 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:42.269 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:42.269 + [[ -x /usr/src/fio-static/fio ]]
00:02:42.269 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:42.269 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:42.269 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:42.269 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:02:42.269 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:42.269 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:42.269 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:42.269 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:42.269 Test configuration:
00:02:42.269 SPDK_TEST_UNITTEST=1
00:02:42.269 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:42.269 SPDK_TEST_NVME=1
00:02:42.269 SPDK_TEST_BLOCKDEV=1
00:02:42.269 SPDK_RUN_ASAN=1
00:02:42.269 SPDK_RUN_UBSAN=1
00:02:42.269 SPDK_TEST_RAID5=1
00:02:42.269 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:42.270 RUN_NIGHTLY=0
00:22:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:42.270 00:22:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:42.270 00:22:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:42.270 00:22:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:42.270 00:22:15 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:42.270 00:22:15 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:42.270 00:22:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:42.270 00:22:15 -- paths/export.sh@5 -- $ export PATH
00:02:42.270 00:22:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:42.270 00:22:15 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:42.270 00:22:15 -- common/autobuild_common.sh@435 -- $ date +%s
00:02:42.270 00:22:15 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714177335.XXXXXX
00:02:42.270 00:22:15 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714177335.tOg6MD
00:02:42.270 00:22:15 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:02:42.270 00:22:15 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:02:42.270 00:22:15 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:42.270 00:22:15 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:42.270 00:22:15 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:42.270 00:22:15 -- common/autobuild_common.sh@451 -- $ get_config_params
00:02:42.270 00:22:15 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:02:42.270 00:22:15 -- common/autotest_common.sh@10 -- $ set +x
00:02:42.270 00:22:15 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:02:42.270 00:22:15 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:02:42.270 00:22:15 -- pm/common@17 -- $ local monitor
00:02:42.270 00:22:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.270 00:22:15 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2138
00:02:42.270 00:22:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:42.270 00:22:15 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2140
00:02:42.270 00:22:15 -- pm/common@26 -- $ sleep 1
00:02:42.270 00:22:15 -- pm/common@21 -- $ date +%s
00:02:42.270 00:22:15 -- pm/common@21 -- $ date +%s
00:02:42.270 00:22:15 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714177335
00:02:42.270 00:22:15 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714177335
00:02:42.270 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714177335_collect-vmstat.pm.log
00:02:42.270 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714177335_collect-cpu-load.pm.log
00:02:43.203 00:22:16 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:02:43.203 00:22:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:43.203 00:22:16 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:43.203 00:22:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:43.203 00:22:16 -- spdk/autobuild.sh@16 -- $ date -u
00:02:43.203 Sat Apr 27 00:22:16 UTC 2024
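The date +%s / mktemp / SPDK_WORKSPACE trio traced above (autobuild_common.sh@435) is a compact pattern for a unique per-build scratch directory; a two-line sketch, with example values taken from the log:

    ts=$(date +%s)                                    # e.g. 1714177335
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")  # e.g. /tmp/spdk_1714177335.tOg6MD

mktemp replaces the XXXXXX suffix with random characters, and -t places the directory under $TMPDIR (or /tmp), so concurrent builds cannot collide even within the same second.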
00:02:43.203 00:22:16 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:43.203 v24.05-pre-451-g6651b13f7
00:02:43.203 00:22:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:43.203 00:22:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:43.203 00:22:16 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:43.203 00:22:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:43.203 00:22:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.461 ************************************
00:02:43.461 START TEST asan
00:02:43.461 ************************************
00:02:43.461 using asan
00:02:43.461 00:22:16 -- common/autotest_common.sh@1111 -- $ echo 'using asan'
00:02:43.461
00:02:43.461 real 0m0.000s
00:02:43.461 user 0m0.000s
00:02:43.461 sys 0m0.000s
00:02:43.461 00:22:16 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:02:43.461 00:22:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.461 ************************************
00:02:43.461 END TEST asan
00:02:43.461 ************************************
00:02:43.461 00:22:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:43.461 00:22:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:43.461 00:22:16 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:43.461 00:22:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:43.461 00:22:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.461 ************************************
00:02:43.461 START TEST ubsan
00:02:43.461 ************************************
00:02:43.461 using ubsan
00:02:43.461 00:22:16 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:02:43.461
00:02:43.461 real 0m0.000s
00:02:43.461 user 0m0.000s
00:02:43.461 sys 0m0.000s
00:02:43.461 00:22:16 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:02:43.461 ************************************
00:02:43.461 END TEST ubsan
00:02:43.461 00:22:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.461 ************************************
00:02:43.461 00:22:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:43.461 00:22:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:43.461 00:22:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:43.461 00:22:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:43.461 00:22:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:43.461 00:22:16 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:43.461 00:22:16 -- spdk/autobuild.sh@58 -- $ unittest_build
00:02:43.461 00:22:16 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:02:43.461 00:22:16 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:02:43.461 00:22:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:43.461 00:22:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:43.461 ************************************
00:02:43.461 START TEST unittest_build
00:02:43.461 ************************************
00:02:43.461 00:22:16 -- common/autotest_common.sh@1111 -- $ _unittest_build
00:02:43.461 00:22:16 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
00:02:43.461 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:43.461 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:44.027 Using 'verbs' RDMA provider
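The START TEST/END TEST banners above are printed by SPDK's run_test helper (common/autotest_common.sh), which names, runs, and times a command — hence the real/user/sys lines. A minimal sketch of such a wrapper, not the project's exact implementation:

    # Hedged sketch; the real helper also manages xtrace and error handling.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # run the remaining arguments as the command
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test asan echo 'using asan'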
00:02:59.513 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:11.733 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:11.733 Creating mk/config.mk...done.
00:03:11.733 Creating mk/cc.flags.mk...done.
00:03:11.733 Type 'make' to build.
00:03:11.733 00:22:43 -- common/autobuild_common.sh@403 -- $ make -j10
00:03:11.733 make[1]: Nothing to be done for 'all'.
00:03:26.628 The Meson build system
00:03:26.628 Version: 1.4.0
00:03:26.628 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:26.628 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:26.628 Build type: native build
00:03:26.628 Program cat found: YES (/usr/bin/cat)
00:03:26.628 Project name: DPDK
00:03:26.628 Project version: 23.11.0
00:03:26.628 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:03:26.628 C linker for the host machine: cc ld.bfd 2.38
00:03:26.628 Host machine cpu family: x86_64
00:03:26.628 Host machine cpu: x86_64
00:03:26.628 Message: ## Building in Developer Mode ##
00:03:26.628 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:26.628 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:26.628 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:26.628 Program python3 found: YES (/usr/bin/python3)
00:03:26.628 Program cat found: YES (/usr/bin/cat)
00:03:26.628 Compiler for C supports arguments -march=native: YES
00:03:26.628 Checking for size of "void *" : 8
00:03:26.628 Checking for size of "void *" : 8 (cached)
00:03:26.628 Library m found: YES
00:03:26.628 Library numa found: YES
00:03:26.628 Has header "numaif.h" : YES
00:03:26.628 Library fdt found: NO
00:03:26.628 Library execinfo found: NO
00:03:26.628 Has header "execinfo.h" : YES
00:03:26.628 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:03:26.628 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:26.628 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:26.628 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:26.628 Run-time dependency openssl found: YES 3.0.2
00:03:26.628 Run-time dependency libpcap found: NO (tried pkgconfig)
00:03:26.628 Library pcap found: NO
00:03:26.628 Compiler for C supports arguments -Wcast-qual: YES
00:03:26.628 Compiler for C supports arguments -Wdeprecated: YES
00:03:26.628 Compiler for C supports arguments -Wformat: YES
00:03:26.628 Compiler for C supports arguments -Wformat-nonliteral: YES
00:03:26.628 Compiler for C supports arguments -Wformat-security: YES
00:03:26.628 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:26.628 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:26.628 Compiler for C supports arguments -Wnested-externs: YES
00:03:26.628 Compiler for C supports arguments -Wold-style-definition: YES
00:03:26.628 Compiler for C supports arguments -Wpointer-arith: YES
00:03:26.628 Compiler for C supports arguments -Wsign-compare: YES
00:03:26.628 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:26.628 Compiler for C supports arguments -Wundef: YES
00:03:26.628 Compiler for C supports arguments -Wwrite-strings: YES
00:03:26.628 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:26.628 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:26.628 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:26.628 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:26.628 Program objdump found: YES (/usr/bin/objdump)
00:03:26.628 Compiler for C supports arguments -mavx512f: YES
00:03:26.628 Checking if "AVX512 checking" compiles: YES
00:03:26.628 Fetching value of define "__SSE4_2__" : 1
00:03:26.628 Fetching value of define "__AES__" : 1
00:03:26.628 Fetching value of define "__AVX__" : 1
00:03:26.628 Fetching value of define "__AVX2__" : 1
00:03:26.628 Fetching value of define "__AVX512BW__" : (undefined)
00:03:26.628 Fetching value of define "__AVX512CD__" : (undefined)
00:03:26.628 Fetching value of define "__AVX512DQ__" : (undefined)
00:03:26.628 Fetching value of define "__AVX512F__" : (undefined)
00:03:26.628 Fetching value of define "__AVX512VL__" : (undefined)
00:03:26.628 Fetching value of define "__PCLMUL__" : 1
00:03:26.628 Fetching value of define "__RDRND__" : 1
00:03:26.628 Fetching value of define "__RDSEED__" : 1
00:03:26.628 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:26.628 Fetching value of define "__znver1__" : (undefined)
00:03:26.628 Fetching value of define "__znver2__" : (undefined)
00:03:26.628 Fetching value of define "__znver3__" : (undefined)
00:03:26.628 Fetching value of define "__znver4__" : (undefined)
00:03:26.628 Library asan found: YES
00:03:26.628 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:26.628 Message: lib/log: Defining dependency "log"
00:03:26.628 Message: lib/kvargs: Defining dependency "kvargs"
00:03:26.628 Message: lib/telemetry: Defining dependency "telemetry"
00:03:26.628 Library rt found: YES
00:03:26.628 Checking for function "getentropy" : NO
00:03:26.628 Message: lib/eal: Defining dependency "eal"
00:03:26.628 Message: lib/ring: Defining dependency "ring"
00:03:26.628 Message: lib/rcu: Defining dependency "rcu"
00:03:26.628 Message: lib/mempool: Defining dependency "mempool"
00:03:26.628 Message: lib/mbuf: Defining dependency "mbuf"
00:03:26.628 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:26.628 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:26.628 Compiler for C supports arguments -mpclmul: YES
00:03:26.628 Compiler for C supports arguments -maes: YES
00:03:26.628 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:26.628 Compiler for C supports arguments -mavx512bw: YES
00:03:26.628 Compiler for C supports arguments -mavx512dq: YES
00:03:26.628 Compiler for C supports arguments -mavx512vl: YES
00:03:26.628 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:26.628 Compiler for C supports arguments -mavx2: YES
00:03:26.628 Compiler for C supports arguments -mavx: YES
00:03:26.628 Message: lib/net: Defining dependency "net"
00:03:26.628 Message: lib/meter: Defining dependency "meter"
00:03:26.628 Message: lib/ethdev: Defining dependency "ethdev"
00:03:26.628 Message: lib/pci: Defining dependency "pci"
00:03:26.628 Message: lib/cmdline: Defining dependency "cmdline"
00:03:26.628 Message: lib/hash: Defining dependency "hash"
00:03:26.628 Message: lib/timer: Defining dependency "timer"
00:03:26.628 Message: lib/compressdev: Defining dependency "compressdev"
00:03:26.628 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:26.628 Message: lib/dmadev: Defining dependency "dmadev"
00:03:26.628 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:26.628 Message: lib/power: Defining dependency "power"
00:03:26.628 Message: lib/reorder: Defining dependency "reorder"
00:03:26.628 Message: lib/security: Defining dependency "security"
"linux/userfaultfd.h" : YES 00:03:26.628 Has header "linux/vduse.h" : YES 00:03:26.628 Message: lib/vhost: Defining dependency "vhost" 00:03:26.628 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:26.628 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:26.628 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:26.628 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:26.628 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:26.628 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:26.628 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:26.628 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:26.628 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:26.628 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:26.628 Program doxygen found: YES (/usr/bin/doxygen) 00:03:26.628 Configuring doxy-api-html.conf using configuration 00:03:26.628 Configuring doxy-api-man.conf using configuration 00:03:26.628 Program mandb found: YES (/usr/bin/mandb) 00:03:26.628 Program sphinx-build found: NO 00:03:26.628 Configuring rte_build_config.h using configuration 00:03:26.628 Message: 00:03:26.628 ================= 00:03:26.629 Applications Enabled 00:03:26.629 ================= 00:03:26.629 00:03:26.629 apps: 00:03:26.629 00:03:26.629 00:03:26.629 Message: 00:03:26.629 ================= 00:03:26.629 Libraries Enabled 00:03:26.629 ================= 00:03:26.629 00:03:26.629 libs: 00:03:26.629 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:26.629 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:26.629 cryptodev, dmadev, power, reorder, security, vhost, 00:03:26.629 00:03:26.629 Message: 00:03:26.629 =============== 00:03:26.629 Drivers Enabled 00:03:26.629 =============== 00:03:26.629 00:03:26.629 common: 00:03:26.629 00:03:26.629 bus: 00:03:26.629 pci, vdev, 00:03:26.629 mempool: 00:03:26.629 ring, 00:03:26.629 dma: 00:03:26.629 00:03:26.629 net: 00:03:26.629 00:03:26.629 crypto: 00:03:26.629 00:03:26.629 compress: 00:03:26.629 00:03:26.629 vdpa: 00:03:26.629 00:03:26.629 00:03:26.629 Message: 00:03:26.629 ================= 00:03:26.629 Content Skipped 00:03:26.629 ================= 00:03:26.629 00:03:26.629 apps: 00:03:26.629 dumpcap: explicitly disabled via build config 00:03:26.629 graph: explicitly disabled via build config 00:03:26.629 pdump: explicitly disabled via build config 00:03:26.629 proc-info: explicitly disabled via build config 00:03:26.629 test-acl: explicitly disabled via build config 00:03:26.629 test-bbdev: explicitly disabled via build config 00:03:26.629 test-cmdline: explicitly disabled via build config 00:03:26.629 test-compress-perf: explicitly disabled via build config 00:03:26.629 test-crypto-perf: explicitly disabled via build config 00:03:26.629 test-dma-perf: explicitly disabled via build config 00:03:26.629 test-eventdev: explicitly disabled via build config 00:03:26.629 test-fib: explicitly disabled via build config 00:03:26.629 test-flow-perf: explicitly disabled via build config 00:03:26.629 test-gpudev: explicitly disabled via build config 00:03:26.629 test-mldev: explicitly disabled via build config 00:03:26.629 test-pipeline: explicitly disabled via build config 00:03:26.629 test-pmd: explicitly disabled via build config 00:03:26.629 test-regex: explicitly disabled via build config 00:03:26.629 test-sad: 
00:03:26.629 test-sad: explicitly disabled via build config
00:03:26.629 test-security-perf: explicitly disabled via build config
00:03:26.629
00:03:26.629 libs:
00:03:26.629 metrics: explicitly disabled via build config
00:03:26.629 acl: explicitly disabled via build config
00:03:26.629 bbdev: explicitly disabled via build config
00:03:26.629 bitratestats: explicitly disabled via build config
00:03:26.629 bpf: explicitly disabled via build config
00:03:26.629 cfgfile: explicitly disabled via build config
00:03:26.629 distributor: explicitly disabled via build config
00:03:26.629 efd: explicitly disabled via build config
00:03:26.629 eventdev: explicitly disabled via build config
00:03:26.629 dispatcher: explicitly disabled via build config
00:03:26.629 gpudev: explicitly disabled via build config
00:03:26.629 gro: explicitly disabled via build config
00:03:26.629 gso: explicitly disabled via build config
00:03:26.629 ip_frag: explicitly disabled via build config
00:03:26.629 jobstats: explicitly disabled via build config
00:03:26.629 latencystats: explicitly disabled via build config
00:03:26.629 lpm: explicitly disabled via build config
00:03:26.629 member: explicitly disabled via build config
00:03:26.629 pcapng: explicitly disabled via build config
00:03:26.629 rawdev: explicitly disabled via build config
00:03:26.629 regexdev: explicitly disabled via build config
00:03:26.629 mldev: explicitly disabled via build config
00:03:26.629 rib: explicitly disabled via build config
00:03:26.629 sched: explicitly disabled via build config
00:03:26.629 stack: explicitly disabled via build config
00:03:26.629 ipsec: explicitly disabled via build config
00:03:26.629 pdcp: explicitly disabled via build config
00:03:26.629 fib: explicitly disabled via build config
00:03:26.629 port: explicitly disabled via build config
00:03:26.629 pdump: explicitly disabled via build config
00:03:26.629 table: explicitly disabled via build config
00:03:26.629 pipeline: explicitly disabled via build config
00:03:26.629 graph: explicitly disabled via build config
00:03:26.629 node: explicitly disabled via build config
00:03:26.629
00:03:26.629 drivers:
00:03:26.629 common/cpt: not in enabled drivers build config
00:03:26.629 common/dpaax: not in enabled drivers build config
00:03:26.629 common/iavf: not in enabled drivers build config
00:03:26.629 common/idpf: not in enabled drivers build config
00:03:26.629 common/mvep: not in enabled drivers build config
00:03:26.629 common/octeontx: not in enabled drivers build config
00:03:26.629 bus/auxiliary: not in enabled drivers build config
00:03:26.629 bus/cdx: not in enabled drivers build config
00:03:26.629 bus/dpaa: not in enabled drivers build config
00:03:26.629 bus/fslmc: not in enabled drivers build config
00:03:26.629 bus/ifpga: not in enabled drivers build config
00:03:26.629 bus/platform: not in enabled drivers build config
00:03:26.629 bus/vmbus: not in enabled drivers build config
00:03:26.629 common/cnxk: not in enabled drivers build config
00:03:26.629 common/mlx5: not in enabled drivers build config
00:03:26.629 common/nfp: not in enabled drivers build config
00:03:26.629 common/qat: not in enabled drivers build config
00:03:26.629 common/sfc_efx: not in enabled drivers build config
00:03:26.629 mempool/bucket: not in enabled drivers build config
00:03:26.629 mempool/cnxk: not in enabled drivers build config
00:03:26.629 mempool/dpaa: not in enabled drivers build config
00:03:26.629 mempool/dpaa2: not in enabled drivers build config
00:03:26.629 mempool/octeontx: not in enabled drivers build config
00:03:26.629 mempool/stack: not in enabled drivers build config
00:03:26.629 dma/cnxk: not in enabled drivers build config
00:03:26.629 dma/dpaa: not in enabled drivers build config
00:03:26.629 dma/dpaa2: not in enabled drivers build config
00:03:26.629 dma/hisilicon: not in enabled drivers build config
00:03:26.629 dma/idxd: not in enabled drivers build config
00:03:26.629 dma/ioat: not in enabled drivers build config
00:03:26.629 dma/skeleton: not in enabled drivers build config
00:03:26.629 net/af_packet: not in enabled drivers build config
00:03:26.629 net/af_xdp: not in enabled drivers build config
00:03:26.629 net/ark: not in enabled drivers build config
00:03:26.629 net/atlantic: not in enabled drivers build config
00:03:26.629 net/avp: not in enabled drivers build config
00:03:26.629 net/axgbe: not in enabled drivers build config
00:03:26.629 net/bnx2x: not in enabled drivers build config
00:03:26.629 net/bnxt: not in enabled drivers build config
00:03:26.629 net/bonding: not in enabled drivers build config
00:03:26.629 net/cnxk: not in enabled drivers build config
00:03:26.629 net/cpfl: not in enabled drivers build config
00:03:26.629 net/cxgbe: not in enabled drivers build config
00:03:26.629 net/dpaa: not in enabled drivers build config
00:03:26.629 net/dpaa2: not in enabled drivers build config
00:03:26.629 net/e1000: not in enabled drivers build config
00:03:26.629 net/ena: not in enabled drivers build config
00:03:26.629 net/enetc: not in enabled drivers build config
00:03:26.629 net/enetfec: not in enabled drivers build config
00:03:26.629 net/enic: not in enabled drivers build config
00:03:26.629 net/failsafe: not in enabled drivers build config
00:03:26.629 net/fm10k: not in enabled drivers build config
00:03:26.629 net/gve: not in enabled drivers build config
00:03:26.629 net/hinic: not in enabled drivers build config
00:03:26.629 net/hns3: not in enabled drivers build config
00:03:26.629 net/i40e: not in enabled drivers build config
00:03:26.629 net/iavf: not in enabled drivers build config
00:03:26.629 net/ice: not in enabled drivers build config
00:03:26.629 net/idpf: not in enabled drivers build config
00:03:26.629 net/igc: not in enabled drivers build config
00:03:26.629 net/ionic: not in enabled drivers build config
00:03:26.629 net/ipn3ke: not in enabled drivers build config
00:03:26.629 net/ixgbe: not in enabled drivers build config
00:03:26.629 net/mana: not in enabled drivers build config
00:03:26.629 net/memif: not in enabled drivers build config
00:03:26.629 net/mlx4: not in enabled drivers build config
00:03:26.629 net/mlx5: not in enabled drivers build config
00:03:26.629 net/mvneta: not in enabled drivers build config
00:03:26.629 net/mvpp2: not in enabled drivers build config
00:03:26.629 net/netvsc: not in enabled drivers build config
00:03:26.629 net/nfb: not in enabled drivers build config
00:03:26.629 net/nfp: not in enabled drivers build config
00:03:26.629 net/ngbe: not in enabled drivers build config
00:03:26.629 net/null: not in enabled drivers build config
00:03:26.629 net/octeontx: not in enabled drivers build config
00:03:26.629 net/octeon_ep: not in enabled drivers build config
00:03:26.629 net/pcap: not in enabled drivers build config
00:03:26.629 net/pfe: not in enabled drivers build config
00:03:26.629 net/qede: not in enabled drivers build config
00:03:26.629 net/ring: not in enabled drivers build config
00:03:26.629 net/sfc: not in enabled drivers build config
00:03:26.629 net/softnic: not in enabled drivers build config
00:03:26.629 net/tap: not in enabled drivers build config
00:03:26.629 net/thunderx: not in enabled drivers build config
00:03:26.629 net/txgbe: not in enabled drivers build config
00:03:26.629 net/vdev_netvsc: not in enabled drivers build config
00:03:26.629 net/vhost: not in enabled drivers build config
00:03:26.629 net/virtio: not in enabled drivers build config
00:03:26.629 net/vmxnet3: not in enabled drivers build config
00:03:26.629 raw/*: missing internal dependency, "rawdev"
00:03:26.629 crypto/armv8: not in enabled drivers build config
00:03:26.629 crypto/bcmfs: not in enabled drivers build config
00:03:26.629 crypto/caam_jr: not in enabled drivers build config
00:03:26.629 crypto/ccp: not in enabled drivers build config
00:03:26.629 crypto/cnxk: not in enabled drivers build config
00:03:26.629 crypto/dpaa_sec: not in enabled drivers build config
00:03:26.629 crypto/dpaa2_sec: not in enabled drivers build config
00:03:26.629 crypto/ipsec_mb: not in enabled drivers build config
00:03:26.629 crypto/mlx5: not in enabled drivers build config
00:03:26.629 crypto/mvsam: not in enabled drivers build config
00:03:26.629 crypto/nitrox: not in enabled drivers build config
00:03:26.630 crypto/null: not in enabled drivers build config
00:03:26.630 crypto/octeontx: not in enabled drivers build config
00:03:26.630 crypto/openssl: not in enabled drivers build config
00:03:26.630 crypto/scheduler: not in enabled drivers build config
00:03:26.630 crypto/uadk: not in enabled drivers build config
00:03:26.630 crypto/virtio: not in enabled drivers build config
00:03:26.630 compress/isal: not in enabled drivers build config
00:03:26.630 compress/mlx5: not in enabled drivers build config
00:03:26.630 compress/octeontx: not in enabled drivers build config
00:03:26.630 compress/zlib: not in enabled drivers build config
00:03:26.630 regex/*: missing internal dependency, "regexdev"
00:03:26.630 ml/*: missing internal dependency, "mldev"
00:03:26.630 vdpa/ifc: not in enabled drivers build config
00:03:26.630 vdpa/mlx5: not in enabled drivers build config
00:03:26.630 vdpa/nfp: not in enabled drivers build config
00:03:26.630 vdpa/sfc: not in enabled drivers build config
00:03:26.630 event/*: missing internal dependency, "eventdev"
00:03:26.630 baseband/*: missing internal dependency, "bbdev"
00:03:26.630 gpu/*: missing internal dependency, "gpudev"
00:03:26.630
00:03:26.630
00:03:26.888 Build targets in project: 85
00:03:26.888
00:03:26.888 DPDK 23.11.0
00:03:26.888
00:03:26.888 User defined options
00:03:26.888 buildtype : debug
00:03:26.888 default_library : static
00:03:26.888 libdir : lib
00:03:26.888 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:26.888 b_sanitize : address
00:03:26.888 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror
00:03:26.888 c_link_args :
00:03:26.888 cpu_instruction_set: native
00:03:26.888 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf
00:03:26.888 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev
00:03:26.888 enable_docs : false
00:03:26.888 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:26.888 enable_kmods : false
00:03:26.888 tests : false
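The "User defined options" block above is meson echoing back the DPDK configuration chosen by SPDK's build scripts. For orientation only, a hypothetical meson setup call that would produce those options (the disable_apps/disable_libs lists are elided here since they appear in full above; SPDK drives this through its own configure, not by hand):

    # Illustrative reconstruction; run from the DPDK source directory.
    meson setup build-tmp \
        --buildtype=debug \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        -Ddefault_library=static \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false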
00:03:26.888
00:03:26.888 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:27.456 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:27.456 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:27.456 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:27.456 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:27.456 [4/265] Linking static target lib/librte_kvargs.a
00:03:27.456 [5/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:27.456 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:27.456 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:27.456 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:27.714 [9/265] Linking static target lib/librte_log.a
00:03:27.714 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:27.714 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:27.973 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:27.973 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:27.973 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:27.973 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:27.973 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:27.973 [17/265] Linking static target lib/librte_telemetry.a
00:03:27.973 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:27.973 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:28.232 [20/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.232 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:28.232 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:28.232 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:28.232 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:28.232 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:28.232 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:28.491 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:28.491 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:28.491 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:28.491 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:28.491 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:28.491 [32/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.491 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:28.491 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:28.491 [35/265] Linking target lib/librte_log.so.24.0
00:03:28.491 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:28.749 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:28.749 [38/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.749 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:28.749 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:28.749 [41/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:03:28.749 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:28.749 [43/265] Linking target lib/librte_kvargs.so.24.0
00:03:28.749 [44/265] Linking target lib/librte_telemetry.so.24.0
00:03:28.749 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:28.749 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:28.749 [47/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:03:29.007 [48/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:03:29.007 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:29.007 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:29.007 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:29.007 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:29.007 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:29.007 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:29.007 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:29.007 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:29.007 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:29.266 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:29.266 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:29.266 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:29.266 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:29.266 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:29.266 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:29.266 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:29.266 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:29.266 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:29.524 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:29.524 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:29.524 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:29.524 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:29.524 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:29.524 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:29.524 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:29.524 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:29.524 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:29.524 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:29.524 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:29.783 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:29.783 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:29.783 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:29.783 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:29.783 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:30.041 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:30.041 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:30.041 [85/265] Linking static target lib/librte_eal.a 00:03:30.041 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:30.041 [87/265] Linking static target lib/librte_ring.a 00:03:30.041 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:30.041 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:30.300 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:30.300 [91/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.300 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:30.300 [93/265] Linking static target lib/librte_mempool.a 00:03:30.300 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:30.300 [95/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:30.300 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:30.300 [97/265] Linking static target lib/librte_rcu.a 00:03:30.559 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:30.559 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:30.559 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:30.559 [101/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.559 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:30.559 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:30.559 [104/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:30.818 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:30.818 [106/265] Linking static target lib/librte_mbuf.a 00:03:30.818 [107/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:30.818 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:30.818 [109/265] Linking static target lib/librte_net.a 00:03:30.818 [110/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:30.818 [111/265] Linking static target lib/librte_meter.a 00:03:31.077 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:31.077 [113/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.077 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:31.077 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:31.077 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.077 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.337 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:31.337 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.596 [120/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:31.596 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:31.596 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:31.596 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:31.855 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:31.855 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:31.855 [126/265] Linking static target lib/librte_pci.a 00:03:31.855 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:31.855 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:31.855 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:31.855 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:31.855 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:31.855 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:31.855 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.132 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:32.132 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:32.132 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:32.132 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:32.132 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:32.132 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:32.132 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:32.132 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:32.132 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:32.132 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:32.422 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:32.422 [145/265] Linking static target lib/librte_cmdline.a 00:03:32.422 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:32.422 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:32.422 [148/265] Linking static target lib/librte_timer.a 00:03:32.422 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:32.681 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:32.681 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:32.681 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:32.940 [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.940 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:32.940 [155/265] Linking static target lib/librte_compressdev.a 00:03:32.940 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:32.940 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:33.198 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:33.198 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:33.198 [160/265] Linking 
static target lib/librte_hash.a 00:03:33.198 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:33.198 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:33.198 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:33.198 [164/265] Linking static target lib/librte_dmadev.a 00:03:33.198 [165/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:33.456 [166/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.456 [167/265] Linking static target lib/librte_ethdev.a 00:03:33.456 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:33.456 [169/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.456 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:33.456 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:33.456 [172/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.456 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:33.715 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:33.715 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.715 [176/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:33.974 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:33.974 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:33.974 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:33.974 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:33.974 [181/265] Linking static target lib/librte_power.a 00:03:33.974 [182/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:33.974 [183/265] Linking static target lib/librte_cryptodev.a 00:03:34.232 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:34.232 [185/265] Linking static target lib/librte_reorder.a 00:03:34.232 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:34.232 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:34.490 [188/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.490 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:34.490 [190/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:34.490 [191/265] Linking static target lib/librte_security.a 00:03:34.749 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.749 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:35.007 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:35.007 [195/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.007 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:35.007 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:35.266 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:35.266 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:35.266 [200/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:35.266 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:35.266 [202/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.266 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:35.525 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:35.525 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:35.525 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:35.525 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:35.525 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:35.784 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:35.784 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:35.784 [211/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:35.784 [212/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:35.784 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:35.784 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:35.784 [215/265] Linking static target drivers/librte_bus_vdev.a 00:03:35.784 [216/265] Linking static target drivers/librte_bus_pci.a 00:03:35.784 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:35.784 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:36.043 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.043 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:36.043 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:36.043 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:36.043 [223/265] Linking static target drivers/librte_mempool_ring.a 00:03:36.302 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.677 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.677 [226/265] Linking target lib/librte_eal.so.24.0 00:03:37.935 [227/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:37.935 [228/265] Linking target lib/librte_meter.so.24.0 00:03:37.935 [229/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:37.935 [230/265] Linking target lib/librte_pci.so.24.0 00:03:37.935 [231/265] Linking target lib/librte_timer.so.24.0 00:03:37.935 [232/265] Linking target lib/librte_dmadev.so.24.0 00:03:37.935 [233/265] Linking target lib/librte_ring.so.24.0 00:03:37.935 [234/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:37.935 [235/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:37.935 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:37.935 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:37.935 [238/265] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:38.193 [239/265] Linking target lib/librte_rcu.so.24.0 00:03:38.193 [240/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:38.193 [241/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:38.193 [242/265] Linking target lib/librte_mempool.so.24.0 00:03:38.193 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:38.193 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:38.193 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:38.193 [246/265] Linking target lib/librte_mbuf.so.24.0 00:03:38.453 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:38.453 [248/265] Linking target lib/librte_compressdev.so.24.0 00:03:38.453 [249/265] Linking target lib/librte_reorder.so.24.0 00:03:38.453 [250/265] Linking target lib/librte_net.so.24.0 00:03:38.453 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:03:38.711 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:38.711 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:38.711 [254/265] Linking target lib/librte_security.so.24.0 00:03:38.711 [255/265] Linking target lib/librte_cmdline.so.24.0 00:03:38.711 [256/265] Linking target lib/librte_hash.so.24.0 00:03:38.711 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.711 [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:38.970 [259/265] Linking target lib/librte_ethdev.so.24.0 00:03:38.970 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:39.228 [261/265] Linking target lib/librte_power.so.24.0 00:03:41.161 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:41.161 [263/265] Linking static target lib/librte_vhost.a 00:03:42.535 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.535 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:42.535 INFO: autodetecting backend as ninja 00:03:42.535 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:43.908 CC lib/log/log.o 00:03:43.908 CC lib/ut/ut.o 00:03:43.908 CC lib/log/log_flags.o 00:03:43.908 CC lib/log/log_deprecated.o 00:03:43.908 CC lib/ut_mock/mock.o 00:03:43.908 LIB libspdk_ut_mock.a 00:03:43.908 LIB libspdk_ut.a 00:03:43.908 LIB libspdk_log.a 00:03:44.166 CC lib/util/bit_array.o 00:03:44.166 CC lib/dma/dma.o 00:03:44.166 CC lib/util/base64.o 00:03:44.166 CC lib/util/cpuset.o 00:03:44.166 CC lib/util/crc16.o 00:03:44.166 CC lib/ioat/ioat.o 00:03:44.166 CC lib/util/crc32.o 00:03:44.166 CXX lib/trace_parser/trace.o 00:03:44.166 CC lib/util/crc32c.o 00:03:44.166 CC lib/vfio_user/host/vfio_user_pci.o 00:03:44.166 CC lib/util/crc32_ieee.o 00:03:44.166 CC lib/vfio_user/host/vfio_user.o 00:03:44.166 CC lib/util/crc64.o 00:03:44.166 CC lib/util/dif.o 00:03:44.423 LIB libspdk_dma.a 00:03:44.423 CC lib/util/fd.o 00:03:44.423 CC lib/util/file.o 00:03:44.423 CC lib/util/hexlify.o 00:03:44.423 CC lib/util/iov.o 00:03:44.423 CC lib/util/math.o 00:03:44.423 LIB libspdk_ioat.a 00:03:44.423 CC lib/util/pipe.o 00:03:44.423 CC lib/util/strerror_tls.o 00:03:44.423 CC lib/util/string.o 00:03:44.423 LIB libspdk_vfio_user.a 00:03:44.423 CC lib/util/uuid.o 
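
The trace to this point covers two build phases: SPDK's bundled DPDK is configured by meson and compiled by ninja (the [1/265]..[265/265] records), after which SPDK's own quiet-make output takes over, printing a CC record per object and a LIB record per static archive. A minimal sketch of reproducing this phase on a stock checkout, assuming a machine with the usual toolchain; the flags shown are illustrative defaults, not this job's exact configuration:

    # fetch SPDK together with its bundled DPDK submodule
    git clone --recurse-submodules https://github.com/spdk/spdk.git
    cd spdk
    sudo ./scripts/pkgdep.sh     # install build prerequisites
    ./configure                  # sets up the dpdk submodule build via meson
    make -j"$(nproc)"            # ninja builds DPDK first, then SPDK's own objects
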
00:03:44.423 CC lib/util/fd_group.o 00:03:44.423 CC lib/util/xor.o 00:03:44.680 CC lib/util/zipf.o 00:03:44.937 LIB libspdk_util.a 00:03:45.195 LIB libspdk_trace_parser.a 00:03:45.195 CC lib/rdma/common.o 00:03:45.195 CC lib/json/json_parse.o 00:03:45.195 CC lib/json/json_util.o 00:03:45.195 CC lib/json/json_write.o 00:03:45.195 CC lib/rdma/rdma_verbs.o 00:03:45.195 CC lib/vmd/vmd.o 00:03:45.195 CC lib/conf/conf.o 00:03:45.195 CC lib/idxd/idxd.o 00:03:45.195 CC lib/env_dpdk/env.o 00:03:45.195 CC lib/env_dpdk/memory.o 00:03:45.452 CC lib/idxd/idxd_user.o 00:03:45.452 LIB libspdk_conf.a 00:03:45.452 CC lib/vmd/led.o 00:03:45.452 CC lib/env_dpdk/pci.o 00:03:45.452 CC lib/env_dpdk/init.o 00:03:45.452 LIB libspdk_rdma.a 00:03:45.452 LIB libspdk_json.a 00:03:45.452 CC lib/env_dpdk/threads.o 00:03:45.452 CC lib/env_dpdk/pci_ioat.o 00:03:45.452 CC lib/env_dpdk/pci_virtio.o 00:03:45.711 CC lib/env_dpdk/pci_vmd.o 00:03:45.711 CC lib/env_dpdk/pci_idxd.o 00:03:45.711 CC lib/env_dpdk/pci_event.o 00:03:45.711 CC lib/env_dpdk/sigbus_handler.o 00:03:45.711 CC lib/env_dpdk/pci_dpdk.o 00:03:45.711 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:45.711 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:45.711 LIB libspdk_idxd.a 00:03:45.969 LIB libspdk_vmd.a 00:03:45.969 CC lib/jsonrpc/jsonrpc_server.o 00:03:45.969 CC lib/jsonrpc/jsonrpc_client.o 00:03:45.969 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:45.969 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:46.228 LIB libspdk_jsonrpc.a 00:03:46.486 CC lib/rpc/rpc.o 00:03:46.744 LIB libspdk_rpc.a 00:03:46.744 LIB libspdk_env_dpdk.a 00:03:47.002 CC lib/keyring/keyring.o 00:03:47.002 CC lib/keyring/keyring_rpc.o 00:03:47.002 CC lib/notify/notify.o 00:03:47.002 CC lib/notify/notify_rpc.o 00:03:47.002 CC lib/trace/trace.o 00:03:47.002 CC lib/trace/trace_flags.o 00:03:47.002 CC lib/trace/trace_rpc.o 00:03:47.002 LIB libspdk_notify.a 00:03:47.261 LIB libspdk_trace.a 00:03:47.261 LIB libspdk_keyring.a 00:03:47.520 CC lib/thread/thread.o 00:03:47.520 CC lib/thread/iobuf.o 00:03:47.520 CC lib/sock/sock.o 00:03:47.520 CC lib/sock/sock_rpc.o 00:03:48.123 LIB libspdk_sock.a 00:03:48.123 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:48.123 CC lib/nvme/nvme_ctrlr.o 00:03:48.123 CC lib/nvme/nvme_fabric.o 00:03:48.123 CC lib/nvme/nvme_ns_cmd.o 00:03:48.123 CC lib/nvme/nvme_ns.o 00:03:48.123 CC lib/nvme/nvme_pcie_common.o 00:03:48.123 CC lib/nvme/nvme_pcie.o 00:03:48.123 CC lib/nvme/nvme_qpair.o 00:03:48.123 CC lib/nvme/nvme.o 00:03:49.061 CC lib/nvme/nvme_quirks.o 00:03:49.061 CC lib/nvme/nvme_transport.o 00:03:49.061 CC lib/nvme/nvme_discovery.o 00:03:49.061 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.061 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.061 CC lib/nvme/nvme_tcp.o 00:03:49.061 LIB libspdk_thread.a 00:03:49.061 CC lib/nvme/nvme_opal.o 00:03:49.061 CC lib/nvme/nvme_io_msg.o 00:03:49.319 CC lib/nvme/nvme_poll_group.o 00:03:49.319 CC lib/nvme/nvme_zns.o 00:03:49.577 CC lib/accel/accel.o 00:03:49.577 CC lib/blob/blobstore.o 00:03:49.577 CC lib/blob/request.o 00:03:49.577 CC lib/init/json_config.o 00:03:49.577 CC lib/virtio/virtio.o 00:03:49.834 CC lib/init/subsystem.o 00:03:49.834 CC lib/nvme/nvme_stubs.o 00:03:49.834 CC lib/virtio/virtio_vhost_user.o 00:03:49.834 CC lib/virtio/virtio_vfio_user.o 00:03:49.834 CC lib/init/subsystem_rpc.o 00:03:50.093 CC lib/blob/zeroes.o 00:03:50.093 CC lib/blob/blob_bs_dev.o 00:03:50.093 CC lib/init/rpc.o 00:03:50.093 CC lib/virtio/virtio_pci.o 00:03:50.093 CC lib/nvme/nvme_auth.o 00:03:50.093 CC lib/nvme/nvme_cuse.o 00:03:50.093 CC lib/accel/accel_rpc.o 00:03:50.351 CC 
lib/accel/accel_sw.o 00:03:50.351 LIB libspdk_init.a 00:03:50.351 CC lib/nvme/nvme_rdma.o 00:03:50.351 LIB libspdk_virtio.a 00:03:50.608 CC lib/event/app.o 00:03:50.608 CC lib/event/log_rpc.o 00:03:50.608 CC lib/event/reactor.o 00:03:50.608 CC lib/event/app_rpc.o 00:03:50.608 CC lib/event/scheduler_static.o 00:03:50.608 LIB libspdk_accel.a 00:03:50.866 CC lib/bdev/bdev.o 00:03:50.866 CC lib/bdev/bdev_rpc.o 00:03:50.866 CC lib/bdev/part.o 00:03:50.866 CC lib/bdev/bdev_zone.o 00:03:51.125 CC lib/bdev/scsi_nvme.o 00:03:51.125 LIB libspdk_event.a 00:03:51.692 LIB libspdk_nvme.a 00:03:53.594 LIB libspdk_blob.a 00:03:53.594 CC lib/lvol/lvol.o 00:03:53.594 CC lib/blobfs/blobfs.o 00:03:53.594 CC lib/blobfs/tree.o 00:03:54.161 LIB libspdk_bdev.a 00:03:54.161 CC lib/nbd/nbd.o 00:03:54.161 CC lib/nbd/nbd_rpc.o 00:03:54.161 CC lib/scsi/dev.o 00:03:54.161 CC lib/scsi/lun.o 00:03:54.161 CC lib/scsi/port.o 00:03:54.161 CC lib/scsi/scsi.o 00:03:54.161 CC lib/ftl/ftl_core.o 00:03:54.161 CC lib/nvmf/ctrlr.o 00:03:54.419 CC lib/scsi/scsi_bdev.o 00:03:54.419 CC lib/nvmf/ctrlr_discovery.o 00:03:54.419 CC lib/scsi/scsi_pr.o 00:03:54.678 LIB libspdk_lvol.a 00:03:54.678 LIB libspdk_blobfs.a 00:03:54.678 CC lib/scsi/scsi_rpc.o 00:03:54.678 CC lib/scsi/task.o 00:03:54.678 CC lib/nvmf/ctrlr_bdev.o 00:03:54.678 CC lib/nvmf/subsystem.o 00:03:54.678 CC lib/nvmf/nvmf.o 00:03:54.678 CC lib/ftl/ftl_init.o 00:03:54.678 CC lib/ftl/ftl_layout.o 00:03:54.937 LIB libspdk_nbd.a 00:03:54.937 CC lib/ftl/ftl_debug.o 00:03:54.937 CC lib/ftl/ftl_io.o 00:03:54.937 CC lib/nvmf/nvmf_rpc.o 00:03:54.937 LIB libspdk_scsi.a 00:03:54.937 CC lib/ftl/ftl_sb.o 00:03:55.196 CC lib/ftl/ftl_l2p.o 00:03:55.196 CC lib/ftl/ftl_l2p_flat.o 00:03:55.196 CC lib/ftl/ftl_nv_cache.o 00:03:55.196 CC lib/iscsi/conn.o 00:03:55.196 CC lib/ftl/ftl_band.o 00:03:55.196 CC lib/vhost/vhost.o 00:03:55.455 CC lib/vhost/vhost_rpc.o 00:03:55.455 CC lib/vhost/vhost_scsi.o 00:03:55.713 CC lib/vhost/vhost_blk.o 00:03:55.972 CC lib/vhost/rte_vhost_user.o 00:03:55.972 CC lib/iscsi/init_grp.o 00:03:55.972 CC lib/iscsi/iscsi.o 00:03:55.972 CC lib/nvmf/transport.o 00:03:55.972 CC lib/ftl/ftl_band_ops.o 00:03:55.972 CC lib/nvmf/tcp.o 00:03:56.230 CC lib/nvmf/rdma.o 00:03:56.230 CC lib/ftl/ftl_writer.o 00:03:56.230 CC lib/ftl/ftl_rq.o 00:03:56.230 CC lib/iscsi/md5.o 00:03:56.488 CC lib/iscsi/param.o 00:03:56.488 CC lib/iscsi/portal_grp.o 00:03:56.488 CC lib/iscsi/tgt_node.o 00:03:56.488 CC lib/ftl/ftl_reloc.o 00:03:56.745 CC lib/ftl/ftl_l2p_cache.o 00:03:56.745 CC lib/ftl/ftl_p2l.o 00:03:56.745 CC lib/iscsi/iscsi_subsystem.o 00:03:56.745 CC lib/iscsi/iscsi_rpc.o 00:03:57.004 LIB libspdk_vhost.a 00:03:57.004 CC lib/iscsi/task.o 00:03:57.004 CC lib/ftl/mngt/ftl_mngt.o 00:03:57.004 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:57.004 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:57.262 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:57.262 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:57.262 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:57.262 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:57.262 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:57.262 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:57.262 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:57.523 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:57.523 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:57.523 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:57.523 CC lib/ftl/utils/ftl_conf.o 00:03:57.523 CC lib/ftl/utils/ftl_md.o 00:03:57.523 CC lib/ftl/utils/ftl_mempool.o 00:03:57.523 CC lib/ftl/utils/ftl_bitmap.o 00:03:57.789 LIB libspdk_iscsi.a 00:03:57.789 CC lib/ftl/utils/ftl_property.o 00:03:57.789 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:57.789 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:57.789 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:57.789 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:57.789 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:58.050 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:58.050 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:58.050 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:58.050 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:58.050 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:58.050 CC lib/ftl/base/ftl_base_dev.o 00:03:58.050 CC lib/ftl/base/ftl_base_bdev.o 00:03:58.050 CC lib/ftl/ftl_trace.o 00:03:58.308 LIB libspdk_ftl.a 00:03:58.873 LIB libspdk_nvmf.a 00:03:59.131 CC module/env_dpdk/env_dpdk_rpc.o 00:03:59.131 CC module/keyring/file/keyring.o 00:03:59.131 CC module/blob/bdev/blob_bdev.o 00:03:59.131 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:59.389 CC module/scheduler/gscheduler/gscheduler.o 00:03:59.389 CC module/keyring/linux/keyring.o 00:03:59.389 CC module/accel/error/accel_error.o 00:03:59.389 CC module/accel/ioat/accel_ioat.o 00:03:59.389 CC module/sock/posix/posix.o 00:03:59.389 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:59.389 LIB libspdk_env_dpdk_rpc.a 00:03:59.389 CC module/accel/error/accel_error_rpc.o 00:03:59.389 CC module/keyring/linux/keyring_rpc.o 00:03:59.389 LIB libspdk_scheduler_gscheduler.a 00:03:59.389 CC module/keyring/file/keyring_rpc.o 00:03:59.389 LIB libspdk_scheduler_dpdk_governor.a 00:03:59.389 CC module/accel/ioat/accel_ioat_rpc.o 00:03:59.389 LIB libspdk_scheduler_dynamic.a 00:03:59.647 LIB libspdk_accel_error.a 00:03:59.647 LIB libspdk_blob_bdev.a 00:03:59.647 LIB libspdk_keyring_linux.a 00:03:59.647 LIB libspdk_keyring_file.a 00:03:59.647 CC module/accel/dsa/accel_dsa.o 00:03:59.647 CC module/accel/dsa/accel_dsa_rpc.o 00:03:59.647 CC module/accel/iaa/accel_iaa_rpc.o 00:03:59.647 CC module/accel/iaa/accel_iaa.o 00:03:59.647 LIB libspdk_accel_ioat.a 00:03:59.647 CC module/bdev/delay/vbdev_delay.o 00:03:59.905 CC module/bdev/gpt/gpt.o 00:03:59.905 CC module/blobfs/bdev/blobfs_bdev.o 00:03:59.905 CC module/bdev/error/vbdev_error.o 00:03:59.905 CC module/bdev/lvol/vbdev_lvol.o 00:03:59.905 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:59.905 LIB libspdk_accel_iaa.a 00:03:59.905 LIB libspdk_accel_dsa.a 00:03:59.905 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:59.905 CC module/bdev/gpt/vbdev_gpt.o 00:03:59.905 CC module/bdev/malloc/bdev_malloc.o 00:03:59.905 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:59.905 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:59.905 LIB libspdk_blobfs_bdev.a 00:04:00.164 CC module/bdev/error/vbdev_error_rpc.o 00:04:00.164 LIB libspdk_sock_posix.a 00:04:00.164 LIB libspdk_bdev_delay.a 00:04:00.164 CC module/bdev/nvme/bdev_nvme.o 00:04:00.164 LIB libspdk_bdev_gpt.a 00:04:00.164 CC module/bdev/null/bdev_null.o 00:04:00.422 CC module/bdev/raid/bdev_raid.o 00:04:00.422 LIB libspdk_bdev_error.a 00:04:00.422 CC module/bdev/passthru/vbdev_passthru.o 00:04:00.422 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:00.422 CC module/bdev/split/vbdev_split.o 00:04:00.422 LIB libspdk_bdev_malloc.a 00:04:00.422 LIB libspdk_bdev_lvol.a 00:04:00.422 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:00.422 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:00.422 CC module/bdev/aio/bdev_aio.o 00:04:00.422 CC module/bdev/aio/bdev_aio_rpc.o 00:04:00.680 CC module/bdev/ftl/bdev_ftl.o 00:04:00.680 CC module/bdev/null/bdev_null_rpc.o 00:04:00.680 CC module/bdev/split/vbdev_split_rpc.o 00:04:00.680 CC module/bdev/ftl/bdev_ftl_rpc.o 
00:04:00.680 LIB libspdk_bdev_passthru.a 00:04:00.680 LIB libspdk_bdev_split.a 00:04:00.680 LIB libspdk_bdev_null.a 00:04:00.938 CC module/bdev/raid/bdev_raid_rpc.o 00:04:00.938 CC module/bdev/raid/bdev_raid_sb.o 00:04:00.938 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:00.938 LIB libspdk_bdev_zone_block.a 00:04:00.938 LIB libspdk_bdev_aio.a 00:04:00.938 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:00.938 CC module/bdev/iscsi/bdev_iscsi.o 00:04:00.938 LIB libspdk_bdev_ftl.a 00:04:00.938 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:00.938 CC module/bdev/raid/raid0.o 00:04:00.938 CC module/bdev/raid/raid1.o 00:04:01.198 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:01.198 CC module/bdev/raid/concat.o 00:04:01.198 CC module/bdev/raid/raid5f.o 00:04:01.198 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:01.198 CC module/bdev/nvme/nvme_rpc.o 00:04:01.198 CC module/bdev/nvme/bdev_mdns_client.o 00:04:01.198 CC module/bdev/nvme/vbdev_opal.o 00:04:01.198 LIB libspdk_bdev_iscsi.a 00:04:01.456 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:01.456 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:01.456 LIB libspdk_bdev_virtio.a 00:04:01.713 LIB libspdk_bdev_raid.a 00:04:03.090 LIB libspdk_bdev_nvme.a 00:04:03.348 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:03.348 CC module/event/subsystems/iobuf/iobuf.o 00:04:03.348 CC module/event/subsystems/vmd/vmd.o 00:04:03.348 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:03.348 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:03.348 CC module/event/subsystems/sock/sock.o 00:04:03.348 CC module/event/subsystems/scheduler/scheduler.o 00:04:03.348 CC module/event/subsystems/keyring/keyring.o 00:04:03.348 LIB libspdk_event_sock.a 00:04:03.607 LIB libspdk_event_keyring.a 00:04:03.607 LIB libspdk_event_vmd.a 00:04:03.607 LIB libspdk_event_vhost_blk.a 00:04:03.607 LIB libspdk_event_scheduler.a 00:04:03.607 LIB libspdk_event_iobuf.a 00:04:03.865 CC module/event/subsystems/accel/accel.o 00:04:03.865 LIB libspdk_event_accel.a 00:04:04.124 CC module/event/subsystems/bdev/bdev.o 00:04:04.381 LIB libspdk_event_bdev.a 00:04:04.638 CC module/event/subsystems/nbd/nbd.o 00:04:04.638 CC module/event/subsystems/scsi/scsi.o 00:04:04.638 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:04.638 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:04.638 LIB libspdk_event_nbd.a 00:04:04.897 LIB libspdk_event_scsi.a 00:04:04.897 LIB libspdk_event_nvmf.a 00:04:04.897 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:04.897 CC module/event/subsystems/iscsi/iscsi.o 00:04:05.155 LIB libspdk_event_vhost_scsi.a 00:04:05.155 LIB libspdk_event_iscsi.a 00:04:05.413 CC app/trace_record/trace_record.o 00:04:05.413 CXX app/trace/trace.o 00:04:05.413 CC examples/nvme/hello_world/hello_world.o 00:04:05.413 CC examples/ioat/perf/perf.o 00:04:05.413 CC examples/vmd/lsvmd/lsvmd.o 00:04:05.413 CC examples/sock/hello_world/hello_sock.o 00:04:05.413 CC examples/accel/perf/accel_perf.o 00:04:05.413 CC examples/bdev/hello_world/hello_bdev.o 00:04:05.413 CC test/accel/dif/dif.o 00:04:05.672 CC examples/blob/hello_world/hello_blob.o 00:04:05.672 LINK lsvmd 00:04:05.672 LINK spdk_trace_record 00:04:05.672 LINK ioat_perf 00:04:05.672 LINK hello_world 00:04:05.930 LINK hello_sock 00:04:05.930 LINK hello_bdev 00:04:05.930 LINK hello_blob 00:04:05.930 LINK spdk_trace 00:04:06.188 LINK dif 00:04:06.188 LINK accel_perf 00:04:06.188 CC examples/nvme/reconnect/reconnect.o 00:04:06.446 CC app/nvmf_tgt/nvmf_main.o 00:04:06.446 CC examples/ioat/verify/verify.o 00:04:06.703 LINK nvmf_tgt 00:04:06.703 LINK reconnect 
00:04:06.703 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:06.703 LINK verify 00:04:06.961 CC examples/vmd/led/led.o 00:04:07.219 LINK led 00:04:07.219 CC examples/nvmf/nvmf/nvmf.o 00:04:07.219 LINK nvme_manage 00:04:07.219 CC app/iscsi_tgt/iscsi_tgt.o 00:04:07.477 CC app/spdk_tgt/spdk_tgt.o 00:04:07.477 LINK nvmf 00:04:07.740 LINK iscsi_tgt 00:04:07.740 LINK spdk_tgt 00:04:08.306 CC test/app/bdev_svc/bdev_svc.o 00:04:08.306 LINK bdev_svc 00:04:08.306 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:08.564 CC examples/nvme/arbitration/arbitration.o 00:04:08.838 LINK arbitration 00:04:08.838 LINK nvme_fuzz 00:04:08.838 CC examples/bdev/bdevperf/bdevperf.o 00:04:08.838 CC examples/blob/cli/blobcli.o 00:04:09.096 CC test/app/histogram_perf/histogram_perf.o 00:04:09.096 LINK histogram_perf 00:04:09.354 LINK blobcli 00:04:09.919 LINK bdevperf 00:04:09.919 CC test/app/jsoncat/jsoncat.o 00:04:09.919 CC test/app/stub/stub.o 00:04:09.919 CC examples/nvme/hotplug/hotplug.o 00:04:09.919 LINK jsoncat 00:04:10.176 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:10.176 LINK stub 00:04:10.176 LINK hotplug 00:04:10.744 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:10.744 CC examples/util/zipf/zipf.o 00:04:11.002 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:11.002 LINK zipf 00:04:11.002 CC examples/thread/thread/thread_ex.o 00:04:11.261 LINK thread 00:04:11.261 CC examples/idxd/perf/perf.o 00:04:11.521 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:11.521 LINK vhost_fuzz 00:04:11.521 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:11.521 CC app/spdk_lspci/spdk_lspci.o 00:04:11.521 CC app/spdk_nvme_perf/perf.o 00:04:11.521 LINK spdk_lspci 00:04:11.521 LINK cmb_copy 00:04:11.521 LINK interrupt_tgt 00:04:11.780 LINK idxd_perf 00:04:12.038 LINK iscsi_fuzz 00:04:12.297 CC app/spdk_nvme_identify/identify.o 00:04:12.555 CC app/spdk_nvme_discover/discovery_aer.o 00:04:12.555 CC examples/nvme/abort/abort.o 00:04:12.555 LINK spdk_nvme_perf 00:04:12.814 LINK spdk_nvme_discover 00:04:12.814 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:12.814 CC app/spdk_top/spdk_top.o 00:04:13.072 LINK abort 00:04:13.072 CC app/vhost/vhost.o 00:04:13.072 LINK pmr_persistence 00:04:13.394 LINK vhost 00:04:13.394 LINK spdk_nvme_identify 00:04:13.394 CC test/bdev/bdevio/bdevio.o 00:04:13.653 CC test/blobfs/mkfs/mkfs.o 00:04:13.911 LINK mkfs 00:04:13.911 CC app/spdk_dd/spdk_dd.o 00:04:13.911 LINK bdevio 00:04:13.911 CC app/fio/nvme/fio_plugin.o 00:04:13.911 LINK spdk_top 00:04:14.170 CC app/fio/bdev/fio_plugin.o 00:04:14.429 LINK spdk_dd 00:04:14.429 TEST_HEADER include/spdk/accel.h 00:04:14.429 TEST_HEADER include/spdk/accel_module.h 00:04:14.429 TEST_HEADER include/spdk/assert.h 00:04:14.429 TEST_HEADER include/spdk/barrier.h 00:04:14.429 TEST_HEADER include/spdk/base64.h 00:04:14.429 TEST_HEADER include/spdk/bdev.h 00:04:14.429 TEST_HEADER include/spdk/bdev_module.h 00:04:14.429 TEST_HEADER include/spdk/bdev_zone.h 00:04:14.429 TEST_HEADER include/spdk/bit_array.h 00:04:14.429 TEST_HEADER include/spdk/bit_pool.h 00:04:14.429 TEST_HEADER include/spdk/blob.h 00:04:14.429 TEST_HEADER include/spdk/blob_bdev.h 00:04:14.429 TEST_HEADER include/spdk/blobfs.h 00:04:14.429 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:14.429 TEST_HEADER include/spdk/conf.h 00:04:14.429 TEST_HEADER include/spdk/config.h 00:04:14.429 TEST_HEADER include/spdk/cpuset.h 00:04:14.429 TEST_HEADER include/spdk/crc16.h 00:04:14.429 TEST_HEADER include/spdk/crc32.h 00:04:14.429 TEST_HEADER include/spdk/crc64.h 00:04:14.429 TEST_HEADER include/spdk/dif.h 
00:04:14.429 TEST_HEADER include/spdk/dma.h 00:04:14.429 TEST_HEADER include/spdk/endian.h 00:04:14.429 TEST_HEADER include/spdk/env.h 00:04:14.429 TEST_HEADER include/spdk/env_dpdk.h 00:04:14.429 TEST_HEADER include/spdk/event.h 00:04:14.429 TEST_HEADER include/spdk/fd.h 00:04:14.429 TEST_HEADER include/spdk/fd_group.h 00:04:14.429 TEST_HEADER include/spdk/file.h 00:04:14.429 TEST_HEADER include/spdk/ftl.h 00:04:14.429 TEST_HEADER include/spdk/gpt_spec.h 00:04:14.429 TEST_HEADER include/spdk/hexlify.h 00:04:14.429 TEST_HEADER include/spdk/histogram_data.h 00:04:14.429 TEST_HEADER include/spdk/idxd.h 00:04:14.429 TEST_HEADER include/spdk/idxd_spec.h 00:04:14.429 TEST_HEADER include/spdk/init.h 00:04:14.429 TEST_HEADER include/spdk/ioat.h 00:04:14.429 TEST_HEADER include/spdk/ioat_spec.h 00:04:14.429 TEST_HEADER include/spdk/iscsi_spec.h 00:04:14.429 TEST_HEADER include/spdk/json.h 00:04:14.429 TEST_HEADER include/spdk/jsonrpc.h 00:04:14.429 TEST_HEADER include/spdk/keyring.h 00:04:14.429 TEST_HEADER include/spdk/keyring_module.h 00:04:14.429 TEST_HEADER include/spdk/likely.h 00:04:14.429 TEST_HEADER include/spdk/log.h 00:04:14.688 TEST_HEADER include/spdk/lvol.h 00:04:14.688 TEST_HEADER include/spdk/memory.h 00:04:14.688 TEST_HEADER include/spdk/mmio.h 00:04:14.688 TEST_HEADER include/spdk/nbd.h 00:04:14.688 CC test/dma/test_dma/test_dma.o 00:04:14.688 TEST_HEADER include/spdk/notify.h 00:04:14.688 TEST_HEADER include/spdk/nvme.h 00:04:14.688 TEST_HEADER include/spdk/nvme_intel.h 00:04:14.688 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:14.688 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:14.688 TEST_HEADER include/spdk/nvme_spec.h 00:04:14.688 TEST_HEADER include/spdk/nvme_zns.h 00:04:14.688 TEST_HEADER include/spdk/nvmf.h 00:04:14.688 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:14.688 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:14.688 TEST_HEADER include/spdk/nvmf_spec.h 00:04:14.688 TEST_HEADER include/spdk/nvmf_transport.h 00:04:14.688 TEST_HEADER include/spdk/opal.h 00:04:14.688 TEST_HEADER include/spdk/opal_spec.h 00:04:14.688 TEST_HEADER include/spdk/pci_ids.h 00:04:14.688 TEST_HEADER include/spdk/pipe.h 00:04:14.688 TEST_HEADER include/spdk/queue.h 00:04:14.688 TEST_HEADER include/spdk/reduce.h 00:04:14.688 TEST_HEADER include/spdk/rpc.h 00:04:14.688 TEST_HEADER include/spdk/scheduler.h 00:04:14.688 TEST_HEADER include/spdk/scsi.h 00:04:14.688 TEST_HEADER include/spdk/scsi_spec.h 00:04:14.688 TEST_HEADER include/spdk/sock.h 00:04:14.688 TEST_HEADER include/spdk/stdinc.h 00:04:14.688 TEST_HEADER include/spdk/string.h 00:04:14.688 CC test/env/vtophys/vtophys.o 00:04:14.688 TEST_HEADER include/spdk/thread.h 00:04:14.688 TEST_HEADER include/spdk/trace.h 00:04:14.688 TEST_HEADER include/spdk/trace_parser.h 00:04:14.688 TEST_HEADER include/spdk/tree.h 00:04:14.688 TEST_HEADER include/spdk/ublk.h 00:04:14.688 TEST_HEADER include/spdk/util.h 00:04:14.688 TEST_HEADER include/spdk/uuid.h 00:04:14.688 TEST_HEADER include/spdk/version.h 00:04:14.688 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:14.688 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:14.688 TEST_HEADER include/spdk/vhost.h 00:04:14.688 TEST_HEADER include/spdk/vmd.h 00:04:14.688 TEST_HEADER include/spdk/xor.h 00:04:14.688 TEST_HEADER include/spdk/zipf.h 00:04:14.688 CXX test/cpp_headers/accel.o 00:04:14.688 CC test/env/mem_callbacks/mem_callbacks.o 00:04:14.688 LINK spdk_nvme 00:04:14.688 LINK vtophys 00:04:14.946 CXX test/cpp_headers/accel_module.o 00:04:14.946 LINK spdk_bdev 00:04:14.946 CXX test/cpp_headers/assert.o 
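
The CXX test/cpp_headers/*.o records above come from a self-containedness check: each public header under include/spdk is compiled on its own, as C++, so a header with missing includes or C++-unsafe constructs fails by name. The general technique, sketched as a hypothetical shell loop rather than the project's actual test harness:

    # compile every public header standalone as C++; failures name the offender
    for h in include/spdk/*.h; do
        echo "#include <spdk/$(basename "$h")>" \
            | g++ -I include -x c++ -std=c++11 -c - -o /dev/null \
            || echo "not self-contained: $h"
    done
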
00:04:14.946 LINK test_dma 00:04:15.204 LINK mem_callbacks 00:04:15.204 CXX test/cpp_headers/barrier.o 00:04:15.204 CXX test/cpp_headers/base64.o 00:04:15.463 CXX test/cpp_headers/bdev.o 00:04:15.463 CXX test/cpp_headers/bdev_module.o 00:04:15.463 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:15.721 CXX test/cpp_headers/bdev_zone.o 00:04:15.721 CC test/env/memory/memory_ut.o 00:04:15.721 LINK env_dpdk_post_init 00:04:15.721 CC test/env/pci/pci_ut.o 00:04:15.978 CXX test/cpp_headers/bit_array.o 00:04:15.979 CXX test/cpp_headers/bit_pool.o 00:04:16.237 CXX test/cpp_headers/blob.o 00:04:16.496 LINK pci_ut 00:04:16.496 CXX test/cpp_headers/blob_bdev.o 00:04:16.496 CXX test/cpp_headers/blobfs.o 00:04:16.754 LINK memory_ut 00:04:16.754 CXX test/cpp_headers/blobfs_bdev.o 00:04:16.754 CXX test/cpp_headers/conf.o 00:04:16.754 CXX test/cpp_headers/config.o 00:04:16.754 CXX test/cpp_headers/cpuset.o 00:04:16.754 CXX test/cpp_headers/crc16.o 00:04:17.013 CXX test/cpp_headers/crc32.o 00:04:17.013 CC test/rpc_client/rpc_client_test.o 00:04:17.013 CC test/event/event_perf/event_perf.o 00:04:17.013 CC test/nvme/aer/aer.o 00:04:17.013 CC test/thread/poller_perf/poller_perf.o 00:04:17.013 CC test/lvol/esnap/esnap.o 00:04:17.013 CXX test/cpp_headers/crc64.o 00:04:17.272 LINK event_perf 00:04:17.272 LINK rpc_client_test 00:04:17.272 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:04:17.272 LINK poller_perf 00:04:17.272 CXX test/cpp_headers/dif.o 00:04:17.272 LINK aer 00:04:17.531 LINK histogram_ut 00:04:17.531 CC test/unit/lib/accel/accel.c/accel_ut.o 00:04:17.531 CXX test/cpp_headers/dma.o 00:04:17.531 CXX test/cpp_headers/endian.o 00:04:17.841 CC test/thread/lock/spdk_lock.o 00:04:17.841 CXX test/cpp_headers/env.o 00:04:17.841 CXX test/cpp_headers/env_dpdk.o 00:04:17.841 CXX test/cpp_headers/event.o 00:04:17.841 CC test/nvme/reset/reset.o 00:04:18.100 CC test/nvme/sgl/sgl.o 00:04:18.100 CC test/event/reactor/reactor.o 00:04:18.100 CC test/event/reactor_perf/reactor_perf.o 00:04:18.100 CXX test/cpp_headers/fd.o 00:04:18.100 CXX test/cpp_headers/fd_group.o 00:04:18.100 LINK reactor 00:04:18.100 LINK reset 00:04:18.100 LINK reactor_perf 00:04:18.100 CXX test/cpp_headers/file.o 00:04:18.358 LINK sgl 00:04:18.358 CC test/event/app_repeat/app_repeat.o 00:04:18.358 CXX test/cpp_headers/ftl.o 00:04:18.358 LINK app_repeat 00:04:18.617 CC test/event/scheduler/scheduler.o 00:04:18.617 CXX test/cpp_headers/gpt_spec.o 00:04:18.875 LINK scheduler 00:04:18.875 CXX test/cpp_headers/hexlify.o 00:04:18.875 CXX test/cpp_headers/histogram_data.o 00:04:18.875 CXX test/cpp_headers/idxd.o 00:04:19.132 CXX test/cpp_headers/idxd_spec.o 00:04:19.133 CXX test/cpp_headers/init.o 00:04:19.133 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:04:19.133 CXX test/cpp_headers/ioat.o 00:04:19.391 CC test/nvme/e2edp/nvme_dp.o 00:04:19.391 CXX test/cpp_headers/ioat_spec.o 00:04:19.391 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:19.391 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:19.391 CXX test/cpp_headers/iscsi_spec.o 00:04:19.649 LINK spdk_lock 00:04:19.649 LINK tree_ut 00:04:19.649 LINK nvme_dp 00:04:19.649 CXX test/cpp_headers/json.o 00:04:19.649 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:19.906 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:19.906 CXX test/cpp_headers/jsonrpc.o 00:04:19.906 LINK accel_ut 00:04:19.906 CXX test/cpp_headers/keyring.o 00:04:19.906 LINK blob_bdev_ut 00:04:20.164 LINK dma_ut 00:04:20.164 CXX test/cpp_headers/keyring_module.o 00:04:20.422 CXX 
test/cpp_headers/likely.o 00:04:20.422 CXX test/cpp_headers/log.o 00:04:20.422 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:20.422 CC test/unit/lib/event/app.c/app_ut.o 00:04:20.422 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:20.680 CXX test/cpp_headers/lvol.o 00:04:20.680 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:20.999 CXX test/cpp_headers/memory.o 00:04:20.999 CC test/nvme/overhead/overhead.o 00:04:20.999 CXX test/cpp_headers/mmio.o 00:04:21.271 LINK ioat_ut 00:04:21.271 CXX test/cpp_headers/nbd.o 00:04:21.271 LINK app_ut 00:04:21.271 LINK overhead 00:04:21.271 LINK blobfs_async_ut 00:04:21.271 CXX test/cpp_headers/notify.o 00:04:21.271 CXX test/cpp_headers/nvme.o 00:04:21.271 LINK reactor_ut 00:04:21.529 CC test/nvme/err_injection/err_injection.o 00:04:21.529 CC test/nvme/startup/startup.o 00:04:21.530 CXX test/cpp_headers/nvme_intel.o 00:04:21.530 LINK err_injection 00:04:21.530 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:21.788 CXX test/cpp_headers/nvme_ocssd.o 00:04:21.788 LINK startup 00:04:21.788 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:21.788 CC test/nvme/reserve/reserve.o 00:04:21.788 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:22.046 LINK blobfs_bdev_ut 00:04:22.046 CXX test/cpp_headers/nvme_spec.o 00:04:22.046 LINK reserve 00:04:22.305 CC test/nvme/simple_copy/simple_copy.o 00:04:22.305 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:22.563 LINK simple_copy 00:04:22.563 LINK esnap 00:04:22.563 CXX test/cpp_headers/nvme_zns.o 00:04:22.822 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:22.822 CXX test/cpp_headers/nvmf.o 00:04:22.822 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:23.081 CXX test/cpp_headers/nvmf_cmd.o 00:04:23.081 LINK blobfs_sync_ut 00:04:23.081 LINK init_grp_ut 00:04:23.081 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:23.339 CC test/nvme/connect_stress/connect_stress.o 00:04:23.339 CXX test/cpp_headers/nvmf_spec.o 00:04:23.340 CXX test/cpp_headers/nvmf_transport.o 00:04:23.340 CC test/nvme/boot_partition/boot_partition.o 00:04:23.340 LINK connect_stress 00:04:23.340 CXX test/cpp_headers/opal.o 00:04:23.340 LINK conn_ut 00:04:23.599 LINK boot_partition 00:04:23.599 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:23.599 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:23.599 CXX test/cpp_headers/opal_spec.o 00:04:23.857 CXX test/cpp_headers/pci_ids.o 00:04:23.857 CC test/unit/lib/bdev/part.c/part_ut.o 00:04:23.857 CXX test/cpp_headers/pipe.o 00:04:24.116 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:24.116 CXX test/cpp_headers/queue.o 00:04:24.116 LINK param_ut 00:04:24.116 CXX test/cpp_headers/reduce.o 00:04:24.116 LINK portal_grp_ut 00:04:24.374 CXX test/cpp_headers/rpc.o 00:04:24.374 CXX test/cpp_headers/scheduler.o 00:04:24.374 CXX test/cpp_headers/scsi.o 00:04:24.632 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:24.632 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:24.632 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:24.632 CXX test/cpp_headers/scsi_spec.o 00:04:24.632 CC test/nvme/compliance/nvme_compliance.o 00:04:24.897 LINK scsi_nvme_ut 00:04:24.897 CXX test/cpp_headers/sock.o 00:04:24.897 LINK bdev_ut 00:04:24.897 LINK jsonrpc_server_ut 00:04:24.897 CXX test/cpp_headers/stdinc.o 00:04:24.897 LINK gpt_ut 00:04:25.157 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:25.157 LINK nvme_compliance 00:04:25.157 CXX test/cpp_headers/string.o 00:04:25.415 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:25.415 CXX test/cpp_headers/thread.o 00:04:25.415 
CC test/unit/lib/log/log.c/log_ut.o 00:04:25.415 LINK iscsi_ut 00:04:25.415 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:25.415 CXX test/cpp_headers/trace.o 00:04:25.674 LINK log_ut 00:04:25.674 CXX test/cpp_headers/trace_parser.o 00:04:25.932 CXX test/cpp_headers/tree.o 00:04:25.932 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:25.932 CXX test/cpp_headers/ublk.o 00:04:25.932 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:25.932 CXX test/cpp_headers/util.o 00:04:26.190 LINK vbdev_lvol_ut 00:04:26.190 CXX test/cpp_headers/uuid.o 00:04:26.190 CXX test/cpp_headers/version.o 00:04:26.190 CC test/nvme/fused_ordering/fused_ordering.o 00:04:26.449 CXX test/cpp_headers/vfio_user_pci.o 00:04:26.449 CXX test/cpp_headers/vfio_user_spec.o 00:04:26.449 LINK json_util_ut 00:04:26.449 LINK fused_ordering 00:04:26.708 CXX test/cpp_headers/vhost.o 00:04:26.708 LINK json_parse_ut 00:04:26.708 LINK tgt_node_ut 00:04:26.708 CXX test/cpp_headers/vmd.o 00:04:26.708 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:26.966 CXX test/cpp_headers/xor.o 00:04:26.966 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:26.966 CXX test/cpp_headers/zipf.o 00:04:26.966 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:26.966 LINK notify_ut 00:04:27.224 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:27.224 CC test/nvme/fdp/fdp.o 00:04:27.224 LINK lvol_ut 00:04:27.224 LINK doorbell_aers 00:04:27.224 CC test/nvme/cuse/cuse.o 00:04:27.481 LINK fdp 00:04:27.481 LINK part_ut 00:04:27.740 LINK json_write_ut 00:04:27.740 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:27.740 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:27.998 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:27.998 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:27.998 LINK blob_ut 00:04:28.565 LINK cuse 00:04:28.565 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:28.565 LINK bdev_raid_sb_ut 00:04:28.565 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:28.565 LINK concat_ut 00:04:28.831 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:28.831 LINK bdev_zone_ut 00:04:28.831 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:28.831 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:29.089 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:29.089 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:29.089 LINK nvme_ut 00:04:29.089 LINK bdev_ut 00:04:29.089 LINK bdev_raid_ut 00:04:29.089 LINK raid1_ut 00:04:29.347 LINK vbdev_zone_block_ut 00:04:29.347 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:29.604 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:29.604 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:29.604 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:29.604 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:30.578 LINK ctrlr_bdev_ut 00:04:30.578 LINK raid5f_ut 00:04:30.859 LINK nvmf_ut 00:04:30.859 LINK nvme_ctrlr_ut 00:04:30.859 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:31.116 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:31.117 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:31.117 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:31.117 LINK ctrlr_discovery_ut 00:04:31.374 LINK subsystem_ut 00:04:31.374 LINK dev_ut 00:04:31.374 LINK scsi_ut 00:04:31.632 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:31.632 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:31.632 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:31.632 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 
00:04:31.890 LINK lun_ut 00:04:31.890 LINK ctrlr_ut 00:04:32.149 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:32.407 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:32.666 LINK nvme_ns_ut 00:04:32.666 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:32.666 LINK nvme_ctrlr_cmd_ut 00:04:32.924 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:32.924 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:33.182 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:33.182 LINK rdma_ut 00:04:33.182 LINK tcp_ut 00:04:33.182 LINK bdev_nvme_ut 00:04:33.440 LINK scsi_bdev_ut 00:04:33.440 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:33.440 LINK nvme_ns_ocssd_cmd_ut 00:04:33.440 LINK nvme_ns_cmd_ut 00:04:33.698 LINK scsi_pr_ut 00:04:33.698 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:33.698 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:33.698 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:33.698 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:33.698 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:33.956 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:33.956 LINK nvme_pcie_ut 00:04:34.215 LINK posix_ut 00:04:34.215 LINK nvme_quirks_ut 00:04:34.474 LINK nvme_poll_group_ut 00:04:34.474 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:34.474 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:34.474 LINK iobuf_ut 00:04:34.474 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:34.474 LINK sock_ut 00:04:34.731 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:34.731 LINK nvme_qpair_ut 00:04:34.731 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:34.990 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:34.990 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:35.249 LINK nvme_io_msg_ut 00:04:35.249 LINK base64_ut 00:04:35.249 LINK nvme_transport_ut 00:04:35.508 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:35.508 LINK nvme_fabric_ut 00:04:35.508 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:35.508 LINK nvme_opal_ut 00:04:35.508 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:35.771 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:04:35.771 LINK pci_event_ut 00:04:35.771 LINK bit_array_ut 00:04:35.771 LINK nvme_pcie_common_ut 00:04:36.041 LINK thread_ut 00:04:36.041 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:36.041 LINK rpc_ut 00:04:36.041 LINK subsystem_ut 00:04:36.041 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:36.298 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:04:36.298 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:36.298 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:36.298 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:36.298 LINK cpuset_ut 00:04:36.298 LINK rpc_ut 00:04:36.556 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:36.556 LINK nvme_tcp_ut 00:04:36.556 LINK crc16_ut 00:04:36.556 LINK keyring_ut 00:04:36.556 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:36.815 LINK transport_ut 00:04:36.815 LINK crc32_ieee_ut 00:04:36.815 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:36.815 LINK idxd_user_ut 00:04:36.815 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:36.815 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:36.815 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:37.074 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:37.074 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:37.074 LINK ftl_l2p_ut 00:04:37.074 LINK crc32c_ut 00:04:37.074 LINK nvme_rdma_ut 00:04:37.074 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:37.333 LINK idxd_ut 
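
Each CC test/unit/lib/<module>/<file>.c/<file>_ut.o record above pairs with a LINK record for a standalone binary, so every suite can be run and debugged in isolation. Assuming the in-tree layout these paths suggest (exact locations may differ by build configuration):

    # run the full unit test sweep the way autotest does
    ./test/unit/unittest.sh

    # or invoke a single suite directly, e.g. the CRC64 suite compiled above
    ./test/unit/lib/util/crc64.c/crc64_ut
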
00:04:37.333 LINK crc64_ut 00:04:37.333 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:37.333 LINK common_ut 00:04:37.333 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:37.592 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:37.592 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:37.592 LINK ftl_bitmap_ut 00:04:37.592 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:37.592 LINK nvme_cuse_ut 00:04:37.592 CC test/unit/lib/util/math.c/math_ut.o 00:04:37.852 LINK ftl_mempool_ut 00:04:37.852 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:37.852 LINK math_ut 00:04:37.852 LINK iov_ut 00:04:37.852 LINK ftl_io_ut 00:04:37.852 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:38.110 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:38.110 LINK ftl_band_ut 00:04:38.110 LINK ftl_mngt_ut 00:04:38.110 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:38.110 CC test/unit/lib/util/string.c/string_ut.o 00:04:38.367 LINK dif_ut 00:04:38.367 LINK xor_ut 00:04:38.367 LINK pipe_ut 00:04:38.625 LINK string_ut 00:04:38.625 LINK vhost_ut 00:04:39.191 LINK ftl_sb_ut 00:04:39.191 LINK ftl_layout_upgrade_ut 00:04:39.449 ************************************ 00:04:39.449 END TEST unittest_build 00:04:39.449 ************************************ 00:04:39.449 00:04:39.449 real 1m56.678s 00:04:39.449 user 10m8.669s 00:04:39.449 sys 1m58.170s 00:04:39.449 00:24:12 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:04:39.449 00:24:12 -- common/autotest_common.sh@10 -- $ set +x 00:04:39.449 00:24:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:39.449 00:24:12 -- pm/common@30 -- $ signal_monitor_resources TERM 00:04:39.449 00:24:12 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:04:39.449 00:24:12 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.449 00:24:12 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:39.449 00:24:13 -- pm/common@45 -- $ pid=2146 00:04:39.449 00:24:13 -- pm/common@52 -- $ sudo kill -TERM 2146 00:04:39.449 00:24:13 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.449 00:24:13 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:39.449 00:24:13 -- pm/common@45 -- $ pid=2145 00:04:39.449 00:24:13 -- pm/common@52 -- $ sudo kill -TERM 2145 00:04:39.709 00:24:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.709 00:24:13 -- nvmf/common.sh@7 -- # uname -s 00:04:39.709 00:24:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.709 00:24:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.709 00:24:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.709 00:24:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.709 00:24:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.709 00:24:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.709 00:24:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.709 00:24:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.709 00:24:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.709 00:24:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.709 00:24:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:636c5f59-6544-4154-9389-06ea88a15810 00:04:39.709 00:24:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=636c5f59-6544-4154-9389-06ea88a15810 00:04:39.709 00:24:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:04:39.709 00:24:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.709 00:24:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.709 00:24:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.709 00:24:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:39.709 00:24:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.709 00:24:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.709 00:24:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.709 00:24:13 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:39.709 00:24:13 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:39.709 00:24:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:39.709 00:24:13 -- paths/export.sh@5 -- # export PATH 00:04:39.709 00:24:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:39.709 00:24:13 -- nvmf/common.sh@47 -- # : 0 00:04:39.709 00:24:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.709 00:24:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.709 00:24:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.709 00:24:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.709 00:24:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.709 00:24:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.709 00:24:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.709 00:24:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.709 00:24:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:39.709 00:24:13 -- spdk/autotest.sh@32 -- # uname -s 00:04:39.709 00:24:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:39.709 00:24:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:39.709 00:24:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.709 00:24:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:39.709 00:24:13 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:39.709 00:24:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:39.709 00:24:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:39.709 00:24:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:39.709 00:24:13 -- spdk/autotest.sh@48 -- # udevadm_pid=99301 00:04:39.709 00:24:13 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:39.709 00:24:13 
-- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:39.709 00:24:13 -- pm/common@17 -- # local monitor 00:04:39.709 00:24:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.709 00:24:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=99304 00:04:39.709 00:24:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:39.709 00:24:13 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=99308 00:04:39.709 00:24:13 -- pm/common@26 -- # sleep 1 00:04:39.709 00:24:13 -- pm/common@21 -- # date +%s 00:04:39.709 00:24:13 -- pm/common@21 -- # date +%s 00:04:39.710 00:24:13 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714177453 00:04:39.710 00:24:13 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714177453 00:04:39.710 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714177453_collect-vmstat.pm.log 00:04:39.710 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714177453_collect-cpu-load.pm.log 00:04:40.667 00:24:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:40.667 00:24:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:40.667 00:24:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:40.667 00:24:14 -- common/autotest_common.sh@10 -- # set +x 00:04:40.667 00:24:14 -- spdk/autotest.sh@59 -- # create_test_list 00:04:40.667 00:24:14 -- common/autotest_common.sh@734 -- # xtrace_disable 00:04:40.667 00:24:14 -- common/autotest_common.sh@10 -- # set +x 00:04:40.667 00:24:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:40.667 00:24:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:40.667 00:24:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:40.667 00:24:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:40.667 00:24:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:40.667 00:24:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:40.954 00:24:14 -- common/autotest_common.sh@1441 -- # uname 00:04:40.954 00:24:14 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:04:40.954 00:24:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:40.954 00:24:14 -- common/autotest_common.sh@1461 -- # uname 00:04:40.954 00:24:14 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:04:40.954 00:24:14 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:40.954 00:24:14 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:40.954 00:24:14 -- spdk/autotest.sh@72 -- # hash lcov 00:04:40.954 00:24:14 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:40.954 00:24:14 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:40.954 --rc lcov_branch_coverage=1 00:04:40.954 --rc lcov_function_coverage=1 00:04:40.954 --rc genhtml_branch_coverage=1 00:04:40.954 --rc genhtml_function_coverage=1 00:04:40.954 --rc genhtml_legend=1 00:04:40.954 --rc geninfo_all_blocks=1 00:04:40.954 ' 00:04:40.954 00:24:14 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:40.954 --rc lcov_branch_coverage=1 00:04:40.954 --rc lcov_function_coverage=1 00:04:40.954 --rc genhtml_branch_coverage=1 00:04:40.954 --rc genhtml_function_coverage=1 00:04:40.954 --rc 
genhtml_legend=1 00:04:40.954 --rc geninfo_all_blocks=1 00:04:40.954 ' 00:04:40.954 00:24:14 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:40.954 --rc lcov_branch_coverage=1 00:04:40.954 --rc lcov_function_coverage=1 00:04:40.954 --rc genhtml_branch_coverage=1 00:04:40.954 --rc genhtml_function_coverage=1 00:04:40.954 --rc genhtml_legend=1 00:04:40.954 --rc geninfo_all_blocks=1 00:04:40.954 --no-external' 00:04:40.954 00:24:14 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:40.954 --rc lcov_branch_coverage=1 00:04:40.954 --rc lcov_function_coverage=1 00:04:40.954 --rc genhtml_branch_coverage=1 00:04:40.954 --rc genhtml_function_coverage=1 00:04:40.954 --rc genhtml_legend=1 00:04:40.954 --rc geninfo_all_blocks=1 00:04:40.954 --no-external' 00:04:40.954 00:24:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:40.954 lcov: LCOV version 1.15 00:04:40.954 00:24:14 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:47.519 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:47.519 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:59.721 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:59.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:59.721 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:59.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:59.721 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:59.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:26.264 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:26.264 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:26.264 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:26.265 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:26.265 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:26.265 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:26.265 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:26.265 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:26.266 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:26.266 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:27.202 00:25:00 -- spdk/autotest.sh@89 -- # timing_enter 
pre_cleanup 00:05:27.203 00:25:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:27.203 00:25:00 -- common/autotest_common.sh@10 -- # set +x 00:05:27.203 00:25:00 -- spdk/autotest.sh@91 -- # rm -f 00:05:27.203 00:25:00 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:27.770 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:27.770 00:25:01 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:27.770 00:25:01 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:27.770 00:25:01 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:27.770 00:25:01 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:27.770 00:25:01 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:27.770 00:25:01 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:27.770 00:25:01 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:27.770 00:25:01 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:27.770 00:25:01 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:27.770 00:25:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:27.770 00:25:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:27.770 00:25:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:27.771 00:25:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:27.771 00:25:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:27.771 00:25:01 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:27.771 No valid GPT data, bailing 00:05:27.771 00:25:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:27.771 00:25:01 -- scripts/common.sh@391 -- # pt= 00:05:27.771 00:25:01 -- scripts/common.sh@392 -- # return 1 00:05:27.771 00:25:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:27.771 1+0 records in 00:05:27.771 1+0 records out 00:05:27.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489712 s, 214 MB/s 00:05:27.771 00:25:01 -- spdk/autotest.sh@118 -- # sync 00:05:27.771 00:25:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:27.771 00:25:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:27.771 00:25:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:29.148 00:25:02 -- spdk/autotest.sh@124 -- # uname -s 00:05:29.148 00:25:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:29.148 00:25:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:29.148 00:25:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.148 00:25:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.148 00:25:02 -- common/autotest_common.sh@10 -- # set +x 00:05:29.407 ************************************ 00:05:29.407 START TEST setup.sh 00:05:29.407 ************************************ 00:05:29.407 00:25:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:29.407 * Looking for test storage... 
00:05:29.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.407 00:25:02 -- setup/test-setup.sh@10 -- # uname -s 00:05:29.407 00:25:02 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:29.407 00:25:02 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:29.407 00:25:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.407 00:25:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.407 00:25:02 -- common/autotest_common.sh@10 -- # set +x 00:05:29.407 ************************************ 00:05:29.407 START TEST acl 00:05:29.407 ************************************ 00:05:29.407 00:25:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:29.407 * Looking for test storage... 00:05:29.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.407 00:25:02 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:29.407 00:25:02 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:29.407 00:25:02 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:29.407 00:25:02 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:29.407 00:25:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:29.407 00:25:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:29.407 00:25:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:29.407 00:25:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:29.407 00:25:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:29.407 00:25:02 -- setup/acl.sh@12 -- # devs=() 00:05:29.407 00:25:02 -- setup/acl.sh@12 -- # declare -a devs 00:05:29.407 00:25:02 -- setup/acl.sh@13 -- # drivers=() 00:05:29.407 00:25:02 -- setup/acl.sh@13 -- # declare -A drivers 00:05:29.407 00:25:02 -- setup/acl.sh@51 -- # setup reset 00:05:29.407 00:25:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.407 00:25:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.974 00:25:03 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:29.974 00:25:03 -- setup/acl.sh@16 -- # local dev driver 00:05:29.974 00:25:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:29.974 00:25:03 -- setup/acl.sh@15 -- # setup output status 00:05:29.974 00:25:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.974 00:25:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:30.234 00:25:03 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:30.234 00:25:03 -- setup/acl.sh@19 -- # continue 00:05:30.234 00:25:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.234 Hugepages 00:05:30.234 node hugesize free / total 00:05:30.234 00:25:03 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:30.234 00:25:03 -- setup/acl.sh@19 -- # continue 00:05:30.234 00:25:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.234 00:05:30.234 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.234 00:25:03 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:30.234 00:25:03 -- setup/acl.sh@19 -- # continue 00:05:30.234 00:25:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.493 00:25:03 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:30.493 00:25:03 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:30.493 00:25:03 -- setup/acl.sh@20 -- # continue 00:05:30.493 00:25:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ 
_ driver _ 00:05:30.493 00:25:03 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:30.493 00:25:03 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:30.493 00:25:03 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:30.493 00:25:03 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:30.493 00:25:03 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:30.493 00:25:03 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.493 00:25:03 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:30.493 00:25:03 -- setup/acl.sh@54 -- # run_test denied denied 00:05:30.493 00:25:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.493 00:25:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.493 00:25:03 -- common/autotest_common.sh@10 -- # set +x 00:05:30.493 ************************************ 00:05:30.493 START TEST denied 00:05:30.493 ************************************ 00:05:30.493 00:25:04 -- common/autotest_common.sh@1111 -- # denied 00:05:30.493 00:25:04 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:30.493 00:25:04 -- setup/acl.sh@38 -- # setup output config 00:05:30.493 00:25:04 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:30.493 00:25:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.493 00:25:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.402 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:32.402 00:25:05 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:32.402 00:25:05 -- setup/acl.sh@28 -- # local dev driver 00:05:32.402 00:25:05 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:32.402 00:25:05 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:32.402 00:25:05 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:32.402 00:25:05 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:32.402 00:25:05 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:32.402 00:25:05 -- setup/acl.sh@41 -- # setup reset 00:05:32.402 00:25:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:32.402 00:25:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.402 00:05:32.402 real 0m1.870s 00:05:32.402 user 0m0.478s 00:05:32.402 sys 0m1.450s 00:05:32.402 ************************************ 00:05:32.402 00:25:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.402 00:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:32.402 END TEST denied 00:05:32.402 ************************************ 00:05:32.402 00:25:05 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:32.402 00:25:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.402 00:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.402 00:25:05 -- common/autotest_common.sh@10 -- # set +x 00:05:32.660 ************************************ 00:05:32.660 START TEST allowed 00:05:32.660 ************************************ 00:05:32.660 00:25:05 -- common/autotest_common.sh@1111 -- # allowed 00:05:32.660 00:25:06 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:32.660 00:25:06 -- setup/acl.sh@45 -- # setup output config 00:05:32.660 00:25:06 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:32.660 00:25:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.660 00:25:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.042 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
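The denied/allowed pair above is driven entirely by environment variables that scripts/setup.sh consumes: PCI_BLOCKED makes setup.sh skip a controller, PCI_ALLOWED restricts binding to the listed BDFs, and the tests pass or fail by grepping for the exact lines seen in this trace. A minimal standalone sketch, assuming only the variables, paths, and grep targets visible above (the real tests route the command through the setup output helper rather than a plain pipe):

    # Block the NVMe controller: setup.sh should skip it and say so.
    PCI_BLOCKED=' 0000:00:10.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:00:10.0'

    # Allow only that controller: it alone is rebound to a userspace driver.
    PCI_ALLOWED='0000:00:10.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config \
        | grep -E '0000:00:10.0 .*: nvme -> .*'

    # Return devices to their kernel drivers between tests, as both tests do.
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset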
00:05:34.042 00:25:07 -- setup/acl.sh@47 -- # verify 00:05:34.042 00:25:07 -- setup/acl.sh@28 -- # local dev driver 00:05:34.042 00:25:07 -- setup/acl.sh@48 -- # setup reset 00:05:34.042 00:25:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:34.042 00:25:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:34.610 00:05:34.610 real 0m2.008s 00:05:34.610 user 0m0.419s 00:05:34.610 sys 0m1.576s 00:05:34.610 00:25:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.610 00:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.610 ************************************ 00:05:34.610 END TEST allowed 00:05:34.610 ************************************ 00:05:34.610 00:05:34.610 real 0m5.170s 00:05:34.610 user 0m1.656s 00:05:34.610 sys 0m3.626s 00:05:34.610 00:25:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.610 ************************************ 00:05:34.610 END TEST acl 00:05:34.610 00:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.610 ************************************ 00:05:34.610 00:25:08 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:34.610 00:25:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.610 00:25:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.610 00:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.610 ************************************ 00:05:34.610 START TEST hugepages 00:05:34.610 ************************************ 00:05:34.610 00:25:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:34.869 * Looking for test storage... 00:05:34.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:34.869 00:25:08 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:34.869 00:25:08 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:34.869 00:25:08 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:34.869 00:25:08 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:34.869 00:25:08 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:34.869 00:25:08 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:34.870 00:25:08 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:34.870 00:25:08 -- setup/common.sh@18 -- # local node= 00:05:34.870 00:25:08 -- setup/common.sh@19 -- # local var val 00:05:34.870 00:25:08 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.870 00:25:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.870 00:25:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.870 00:25:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.870 00:25:08 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.870 00:25:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 2883280 kB' 'MemAvailable: 7405716 kB' 'Buffers: 35540 kB' 'Cached: 4623248 kB' 'SwapCached: 0 kB' 'Active: 1012832 kB' 'Inactive: 3767720 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 132640 kB' 'Active(file): 1011784 kB' 'Inactive(file): 3635080 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 464 kB' 'Writeback: 44 kB' 'AnonPages: 151556 kB' 'Mapped: 68624 kB' 'Shmem: 2596 kB' 'KReclaimable: 197088 kB' 'Slab: 261828 
kB' 'SReclaimable: 197088 kB' 'SUnreclaim: 64740 kB' 'KernelStack: 4548 kB' 'PageTables: 4236 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024332 kB' 'Committed_AS: 509264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # 
continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 
-- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.870 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.870 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # continue 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.871 00:25:08 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.871 00:25:08 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:34.871 00:25:08 -- setup/common.sh@33 -- # echo 2048 00:05:34.871 00:25:08 -- setup/common.sh@33 -- # return 0 00:05:34.871 00:25:08 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:34.871 00:25:08 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:34.871 00:25:08 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:34.871 00:25:08 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:34.871 00:25:08 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:34.871 00:25:08 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:34.871 00:25:08 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 
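Every "[[ SomeKey == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue" pair traced above is one iteration of get_meminfo walking /proc/meminfo key by key until the requested field matches, at which point it echoes the value (here 2048, i.e. 2 MiB hugepages). A condensed sketch of that loop, reconstructed from the xtrace rather than quoted from setup/common.sh (the real helper also handles the optional node= argument, omitted here):

    get_meminfo() {
        local get=$1 var val _
        # Split each /proc/meminfo line on ': ', skip keys until $get
        # matches, then print its value in kB -- as the trace above does.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # -> 2048 on this run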
00:05:34.871 00:25:08 -- setup/hugepages.sh@207 -- # get_nodes 00:05:34.871 00:25:08 -- setup/hugepages.sh@27 -- # local node 00:05:34.871 00:25:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:34.871 00:25:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:34.871 00:25:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.871 00:25:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.871 00:25:08 -- setup/hugepages.sh@208 -- # clear_hp 00:05:34.871 00:25:08 -- setup/hugepages.sh@37 -- # local node hp 00:05:34.871 00:25:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:34.871 00:25:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:34.871 00:25:08 -- setup/hugepages.sh@41 -- # echo 0 00:05:34.871 00:25:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:34.871 00:25:08 -- setup/hugepages.sh@41 -- # echo 0 00:05:34.871 00:25:08 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:34.871 00:25:08 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:34.871 00:25:08 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:34.871 00:25:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.871 00:25:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.871 00:25:08 -- common/autotest_common.sh@10 -- # set +x 00:05:34.871 ************************************ 00:05:34.871 START TEST default_setup 00:05:34.871 ************************************ 00:05:34.871 00:25:08 -- common/autotest_common.sh@1111 -- # default_setup 00:05:34.871 00:25:08 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:34.871 00:25:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:34.871 00:25:08 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:34.871 00:25:08 -- setup/hugepages.sh@51 -- # shift 00:05:34.871 00:25:08 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:34.871 00:25:08 -- setup/hugepages.sh@52 -- # local node_ids 00:05:34.871 00:25:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:34.871 00:25:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:34.871 00:25:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:34.871 00:25:08 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:34.871 00:25:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.871 00:25:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:34.871 00:25:08 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.871 00:25:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.871 00:25:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.871 00:25:08 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:34.871 00:25:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:34.871 00:25:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:34.871 00:25:08 -- setup/hugepages.sh@73 -- # return 0 00:05:34.871 00:25:08 -- setup/hugepages.sh@137 -- # setup output 00:05:34.871 00:25:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.871 00:25:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:35.388 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.958 00:25:09 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:35.958 00:25:09 -- 
setup/hugepages.sh@89 -- # local node 00:05:35.958 00:25:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:35.958 00:25:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:35.958 00:25:09 -- setup/hugepages.sh@92 -- # local surp 00:05:35.958 00:25:09 -- setup/hugepages.sh@93 -- # local resv 00:05:35.958 00:25:09 -- setup/hugepages.sh@94 -- # local anon 00:05:35.958 00:25:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:35.958 00:25:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:35.958 00:25:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:35.958 00:25:09 -- setup/common.sh@18 -- # local node= 00:05:35.958 00:25:09 -- setup/common.sh@19 -- # local var val 00:05:35.958 00:25:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.958 00:25:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.958 00:25:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.958 00:25:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.958 00:25:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.958 00:25:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.958 00:25:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969852 kB' 'MemAvailable: 9492364 kB' 'Buffers: 35540 kB' 'Cached: 4623092 kB' 'SwapCached: 0 kB' 'Active: 1012888 kB' 'Inactive: 3783368 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 148224 kB' 'Active(file): 1011828 kB' 'Inactive(file): 3635144 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 548 kB' 'Writeback: 0 kB' 'AnonPages: 167088 kB' 'Mapped: 68340 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261612 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64556 kB' 'KernelStack: 4428 kB' 'PageTables: 3928 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:05:35.958 00:25:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.958 00:25:09 -- setup/common.sh@32 -- # continue 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.958 00:25:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.958 00:25:09 -- setup/common.sh@32 -- # continue 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.958 00:25:09 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.958 00:25:09 -- setup/common.sh@32 -- # continue 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.958 00:25:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.958 00:25:09 -- setup/common.sh@32 -- # continue 00:05:35.958 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 
[xtrace elided: get_meminfo compares each remaining /proc/meminfo key against AnonHugePages, continuing past every key that does not match]
00:05:35.959 00:25:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.959 00:25:09 -- setup/common.sh@33 -- # echo 0 00:05:35.959 00:25:09 -- setup/common.sh@33 -- # return 0 00:05:35.959 00:25:09 -- setup/hugepages.sh@97 -- # anon=0 00:05:35.959 00:25:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:35.959 00:25:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.959 00:25:09 -- setup/common.sh@18 -- # local node= 00:05:35.959 00:25:09 -- setup/common.sh@19 -- # local var val 00:05:35.959 00:25:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.959 00:25:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.959 00:25:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.959 00:25:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.959 00:25:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.959 00:25:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.959 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.959 00:25:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969600 kB' 'MemAvailable: 9492116 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012872 kB' 'Inactive: 3783012 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 147864 kB' 'Active(file): 1011828 kB' 'Inactive(file): 3635148 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 556 kB' 'Writeback: 0 kB' 'AnonPages: 166556 kB' 'Mapped: 68116 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261644 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64588 kB' 'KernelStack: 4400 kB' 'PageTables: 3620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
[xtrace elided: get_meminfo scans the /proc/meminfo keys again, this time looking for HugePages_Surp]
00:05:35.960 00:25:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.960 00:25:09 -- setup/common.sh@33 -- # echo 0 00:05:35.960 00:25:09 -- setup/common.sh@33 -- # return 0 00:05:35.960 00:25:09 -- setup/hugepages.sh@99 -- # surp=0 00:05:35.960 00:25:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.960 00:25:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.960 00:25:09 -- setup/common.sh@18 -- # local node= 00:05:35.960 00:25:09 -- setup/common.sh@19 -- # local var val
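At this point the trace has established anon=0 and surp=0 and is about to fetch HugePages_Rsvd. In /proc/meminfo terms, HugePages_Surp counts surplus pages allocated beyond nr_hugepages, and HugePages_Rsvd counts pages reserved by mappings but not yet faulted in. The point of collecting all three is the consistency check the trace reaches further down; roughly, and assuming the get_meminfo sketch above (exact variable handling in the real script may differ):

  # Sketch of the check verify_nr_hugepages is building up to:
  anon=$(get_meminfo AnonHugePages)   # 0 -- THP is in madvise mode here
  surp=$(get_meminfo HugePages_Surp)  # 0 -- no surplus pages in use
  resv=$(get_meminfo HugePages_Rsvd)  # 0 -- nothing reserved but unfaulted
  nr_hugepages=1024
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1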
00:05:35.960 00:25:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.960 00:25:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.960 00:25:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.960 00:25:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.960 00:25:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.960 00:25:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.960 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.960 00:25:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969600 kB' 'MemAvailable: 9492116 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012872 kB' 'Inactive: 3783068 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 147920 kB' 'Active(file): 1011828 kB' 'Inactive(file): 3635148 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 556 kB' 'Writeback: 0 kB' 'AnonPages: 166564 kB' 'Mapped: 68116 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261644 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64588 kB' 'KernelStack: 4400 kB' 'PageTables: 3620 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
[xtrace elided: get_meminfo scans the /proc/meminfo keys once more, looking for HugePages_Rsvd]
00:05:36.222 00:25:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.222 00:25:09 -- setup/common.sh@33 -- # echo 0 00:05:36.222 00:25:09 -- setup/common.sh@33 -- # return 0 00:05:36.222 00:25:09 -- setup/hugepages.sh@100 -- # resv=0 00:05:36.222 00:25:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:36.222 nr_hugepages=1024 00:05:36.222 00:25:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:36.222 resv_hugepages=0 00:05:36.222 00:25:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:36.222 surplus_hugepages=0 00:05:36.222 00:25:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:36.222 anon_hugepages=0 00:05:36.222 00:25:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.222 00:25:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:36.222 00:25:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:36.222 00:25:09 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:36.222 00:25:09 -- setup/common.sh@18 -- # local node= 00:05:36.222 00:25:09 -- setup/common.sh@19 -- # local var val 00:05:36.222 00:25:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.222 00:25:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.222 00:25:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.222 00:25:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.222 00:25:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.222 00:25:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.222 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.222 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.222 00:25:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969096 kB' 'MemAvailable: 9491612 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012872 kB' 'Inactive: 3783340 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 148192 kB' 'Active(file): 1011828 kB' 'Inactive(file): 3635148 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 556 kB' 'Writeback: 0 kB' 'AnonPages: 166848 kB' 'Mapped: 68116 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261644 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64588 kB' 'KernelStack: 4416 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
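The snapshot above is internally consistent with the pool the test requested: default_setup asked get_test_nr_hugepages for 2097152 kB, and with the 2048 kB Hugepagesize reported by the kernel that works out to exactly the 1024 pages echoed as nr_hugepages. A quick check of the arithmetic (a sketch of the sizing logic, using the values from the trace):

  # nr_hugepages derivation, using the numbers reported above
  size_kb=2097152      # requested pool size from the trace
  hugepagesize_kb=2048 # Hugepagesize from /proc/meminfo
  echo $(( size_kb / hugepagesize_kb ))   # -> 1024, the nr_hugepages above
  echo $(( 1024 * hugepagesize_kb ))      # -> 2097152, the 'Hugetlb:' line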
[xtrace elided: get_meminfo scans the /proc/meminfo keys a fourth time, looking for HugePages_Total]
00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.224 00:25:09 -- setup/common.sh@33 -- # echo 1024 00:05:36.224 00:25:09 -- setup/common.sh@33 -- # return 0 00:05:36.224 00:25:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.224 00:25:09 -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.224 00:25:09 -- setup/hugepages.sh@27 -- # local node 00:05:36.224 00:25:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.224 00:25:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.224 00:25:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:36.224 00:25:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.224 00:25:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.224 00:25:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.224 00:25:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.224 00:25:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.224 00:25:09 -- setup/common.sh@18 -- # local node=0 00:05:36.224 00:25:09 -- setup/common.sh@19 -- # local var val 00:05:36.224 00:25:09 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.224 00:25:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.224 00:25:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.224 00:25:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.224 00:25:09 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.224 00:25:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969096 kB' 'MemUsed: 7273876 kB' 'SwapCached: 0 kB' 'Active: 1012872 kB' 'Inactive: 3783348 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 148200 kB' 'Active(file): 1011828 kB' 'Inactive(file): 3635148 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 556 kB' 'Writeback: 0 kB' 'FilePages: 4658636 kB' 'Mapped: 68116 kB' 'AnonPages: 166596 kB' 'Shmem: 2596 kB' 'KernelStack: 4484 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197056 kB' 'Slab: 261644 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
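The test now repeats the surplus check per NUMA node. get_nodes found a single node (no_nodes=1), so nodes_test has one entry, and get_meminfo is called with node 0, which switches mem_f to /sys/devices/system/node/node0/meminfo; hence the "Node <N> " prefix stripping seen at setup/common.sh@29. A hedged sketch of the loop, reconstructed from the trace and reusing the get_meminfo sketch from earlier:

  # Per-node accounting as reconstructed from the trace (single-node VM)
  declare -a nodes_test=([0]=1024)   # expected pages on node 0
  resv=0
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))              # resv is 0 here
    surp=$(get_meminfo HugePages_Surp "$node")  # per-node lookup, node0 here
    (( nodes_test[node] += surp ))              # surp is 0 too, total stays 1024
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
  done
  # prints: node0=1024 expecting 1024 -- matching the line echoed below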
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # 
continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.224 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.224 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 00:25:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.225 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.225 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 00:25:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.225 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.225 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 00:25:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.225 00:25:09 -- setup/common.sh@32 -- # continue 00:05:36.225 00:25:09 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.225 00:25:09 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.225 00:25:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.225 00:25:09 -- setup/common.sh@33 -- # echo 0 00:05:36.225 00:25:09 -- setup/common.sh@33 -- # return 0 00:05:36.225 00:25:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.225 00:25:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.225 00:25:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.225 00:25:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.225 00:25:09 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.225 node0=1024 expecting 1024 00:05:36.225 00:25:09 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.225 00:05:36.225 real 0m1.314s 00:05:36.225 user 0m0.393s 00:05:36.225 sys 0m0.802s 00:05:36.225 00:25:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.225 00:25:09 -- common/autotest_common.sh@10 -- # set +x 00:05:36.225 ************************************ 00:05:36.225 END TEST default_setup 00:05:36.225 ************************************ 00:05:36.225 00:25:09 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:36.225 00:25:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.225 00:25:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.225 00:25:09 -- common/autotest_common.sh@10 -- # 
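Everything default_setup just did funnels through get_meminfo, whose xtrace dominates the log above: pick /proc/meminfo or, when a node id is passed, /sys/devices/system/node/node<N>/meminfo, strip the per-node "Node <n> " prefix, then scan field by field until the requested key matches. A minimal standalone sketch reconstructed from the trace (the real helper lives in setup/common.sh; its exact source may differ):

#!/usr/bin/env bash
shopt -s extglob                     # for the "Node +([0-9]) " prefix strip

get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem line
    # Prefer the per-node view when a NUMA node id was given and exists.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; drop it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total          # global count, e.g. 1024 above
get_meminfo HugePages_Surp 0         # node 0 surplus, e.g. 0 above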
00:05:36.225 00:25:09 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:36.225 00:25:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:36.225 00:25:09 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:36.225 00:25:09 -- common/autotest_common.sh@10 -- # set +x
00:05:36.225 ************************************
00:05:36.225 START TEST per_node_1G_alloc
00:05:36.225 ************************************
00:05:36.225 00:25:09 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:05:36.225 00:25:09 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:36.225 00:25:09 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:36.225 00:25:09 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:36.225 00:25:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:36.225 00:25:09 -- setup/hugepages.sh@51 -- # shift
00:05:36.225 00:25:09 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:36.225 00:25:09 -- setup/hugepages.sh@52 -- # local node_ids
00:05:36.225 00:25:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:36.225 00:25:09 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:36.225 00:25:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:36.225 00:25:09 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:36.225 00:25:09 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:36.225 00:25:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:36.225 00:25:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:36.225 00:25:09 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:36.225 00:25:09 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:36.225 00:25:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:36.225 00:25:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:36.225 00:25:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:36.225 00:25:09 -- setup/hugepages.sh@73 -- # return 0
00:05:36.225 00:25:09 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:36.225 00:25:09 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:36.225 00:25:09 -- setup/hugepages.sh@146 -- # setup output
00:05:36.225 00:25:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:36.225 00:25:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:36.484 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:36.484 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
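get_test_nr_hugepages converted the requested 1048576 kB (1 GiB) into a page count before handing control to scripts/setup.sh. A sketch of that arithmetic under the values visible in the trace (the exact rounding logic of the real function is not shown in this excerpt):

size=1048576                          # requested pool in kB, i.e. 1 GiB
default_hugepages=2048                # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$((size / default_hugepages))
echo "$nr_hugepages"                  # 512, matching nr_hugepages=512 above

The resulting knobs are what the trace exports before running the script:

NRHUGE=512 HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh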
00:05:37.056 00:25:10 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:37.056 00:25:10 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:37.056 00:25:10 -- setup/hugepages.sh@89 -- # local node
00:05:37.056 00:25:10 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:37.056 00:25:10 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:37.056 00:25:10 -- setup/hugepages.sh@92 -- # local surp
00:05:37.056 00:25:10 -- setup/hugepages.sh@93 -- # local resv
00:05:37.056 00:25:10 -- setup/hugepages.sh@94 -- # local anon
00:05:37.056 00:25:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:37.056 00:25:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:37.056 00:25:10 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:37.056 00:25:10 -- setup/common.sh@18 -- # local node=
00:05:37.056 00:25:10 -- setup/common.sh@19 -- # local var val
00:05:37.056 00:25:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.056 00:25:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.056 00:25:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.056 00:25:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.056 00:25:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.056 00:25:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.056 00:25:10 -- setup/common.sh@31 -- # IFS=': '
00:05:37.056 00:25:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6012744 kB' 'MemAvailable: 10535268 kB' 'Buffers: 35540 kB' 'Cached: 4623104 kB' 'SwapCached: 0 kB' 'Active: 1012900 kB' 'Inactive: 3783676 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 148536 kB' 'Active(file): 1011844 kB' 'Inactive(file): 3635140 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 592 kB' 'Writeback: 0 kB' 'AnonPages: 167160 kB' 'Mapped: 68148 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261532 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64476 kB' 'KernelStack: 4476 kB' 'PageTables: 3940 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:37.056 00:25:10 -- setup/common.sh@31 -- # read -r var val _
00:05:37.056 [setup/common.sh@31-32 xtrace loop: every field from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits continue]
00:05:37.057 00:25:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:37.057 00:25:10 -- setup/common.sh@33 -- # echo 0
00:05:37.057 00:25:10 -- setup/common.sh@33 -- # return 0
00:05:37.057 00:25:10 -- setup/hugepages.sh@97 -- # anon=0
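The anon counter gathered above exists because of the guard at hugepages.sh@96: the bracketed selection in the kernel's transparent_hugepage setting was "always [madvise] never", i.e. THP is not fully off, so THP-backed AnonHugePages could distort the totals and must be sampled. A sketch of that guard (standard sysfs path; not the script's literal code):

thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then
    # THP not disabled: AnonHugePages in /proc/meminfo may be nonzero,
    # so sample it for the accounting below (0 kB in this run).
    anon=$(get_meminfo AnonHugePages)
fi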
00:05:37.057 00:25:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:37.057 00:25:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.057 00:25:10 -- setup/common.sh@18 -- # local node=
00:05:37.057 00:25:10 -- setup/common.sh@19 -- # local var val
00:05:37.057 00:25:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.057 00:25:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.057 00:25:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.057 00:25:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.057 00:25:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.057 00:25:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.057 00:25:10 -- setup/common.sh@31 -- # IFS=': '
00:05:37.057 00:25:10 -- setup/common.sh@31 -- # read -r var val _
00:05:37.057 00:25:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6012520 kB' 'MemAvailable: 10535036 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012888 kB' 'Inactive: 3783392 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 148260 kB' 'Active(file): 1011844 kB' 'Inactive(file): 3635132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 592 kB' 'Writeback: 0 kB' 'AnonPages: 166952 kB' 'Mapped: 68068 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261524 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64468 kB' 'KernelStack: 4440 kB' 'PageTables: 4008 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:37.058 [setup/common.sh@31-32 xtrace loop: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits continue]
00:05:37.059 00:25:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.059 00:25:10 -- setup/common.sh@33 -- # echo 0
00:05:37.059 00:25:10 -- setup/common.sh@33 -- # return 0
00:05:37.059 00:25:10 -- setup/hugepages.sh@99 -- # surp=0
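surp and resv are two distinct kernel counters that both feed the upcoming total check. Per standard /proc/meminfo semantics: HugePages_Surp counts pages allocated beyond nr_hugepages via overcommit, while HugePages_Rsvd counts pages a mapping has reserved but not yet faulted in. Both are visible directly:

grep -E 'HugePages_(Rsvd|Surp)' /proc/meminfo
# HugePages_Rsvd:        0
# HugePages_Surp:        0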
00:05:37.059 00:25:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:37.059 00:25:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:37.059 00:25:10 -- setup/common.sh@18 -- # local node=
00:05:37.059 00:25:10 -- setup/common.sh@19 -- # local var val
00:05:37.059 00:25:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.059 00:25:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.059 00:25:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.059 00:25:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.059 00:25:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.059 00:25:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.059 00:25:10 -- setup/common.sh@31 -- # IFS=': '
00:05:37.059 00:25:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6012520 kB' 'MemAvailable: 10535036 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012888 kB' 'Inactive: 3783392 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 148260 kB' 'Active(file): 1011844 kB' 'Inactive(file): 3635132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 592 kB' 'Writeback: 0 kB' 'AnonPages: 166952 kB' 'Mapped: 68068 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261524 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64468 kB' 'KernelStack: 4440 kB' 'PageTables: 4008 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:37.059 00:25:10 -- setup/common.sh@31 -- # read -r var val _
00:05:37.060 [setup/common.sh@31-32 xtrace loop: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue]
00:05:37.061 00:25:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.061 00:25:10 -- setup/common.sh@33 -- # echo 0
00:05:37.061 00:25:10 -- setup/common.sh@33 -- # return 0
00:05:37.061 00:25:10 -- setup/hugepages.sh@100 -- # resv=0
00:05:37.061 nr_hugepages=512
00:05:37.061 00:25:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:37.061 resv_hugepages=0
00:05:37.061 00:25:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:37.061 surplus_hugepages=0
00:05:37.061 00:25:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:37.061 anon_hugepages=0
00:05:37.061 00:25:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:37.061 00:25:10 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:37.061 00:25:10 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
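With anon, surp, and resv collected, the echoes above summarize the state and hugepages.sh@107 asserts the accounting identity before re-reading HugePages_Total. Spelled out with this run's values:

nr_hugepages=512   # requested via NRHUGE
surp=0             # HugePages_Surp lookup above
resv=0             # HugePages_Rsvd lookup above
total=512          # HugePages_Total, fetched by the next get_meminfo call
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"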
mem_f=/proc/meminfo
00:05:37.061 00:25:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.061 00:25:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.061 00:25:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.061 00:25:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.061 00:25:10 -- setup/common.sh@31 -- # IFS=': '
00:05:37.061 00:25:10 -- setup/common.sh@31 -- # read -r var val _
00:05:37.061 00:25:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6012772 kB' 'MemAvailable: 10535288 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012888 kB' 'Inactive: 3783196 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 148064 kB' 'Active(file): 1011844 kB' 'Inactive(file): 3635132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 596 kB' 'Writeback: 0 kB' 'AnonPages: 166696 kB' 'Mapped: 68108 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261524 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64468 kB' 'KernelStack: 4464 kB' 'PageTables: 3764 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:37.061 00:25:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.061 00:25:10 -- setup/common.sh@32 -- # continue
[... the @31/@32 read loop repeats for every remaining meminfo key until HugePages_Total matches ...]
00:05:37.063 00:25:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.063 00:25:10 -- setup/common.sh@33 -- # echo 512
00:05:37.063 00:25:10 -- setup/common.sh@33 -- # return 0
00:05:37.063 00:25:10 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:37.063 00:25:10 -- setup/hugepages.sh@112 -- # get_nodes
00:05:37.063 00:25:10 -- setup/hugepages.sh@27 -- # local node
00:05:37.063 00:25:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:37.063 00:25:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
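The scan traced above is setup/common.sh's get_meminfo helper: it loads the chosen meminfo file into an array, strips the "Node N " prefix that the per-node sysfs files carry, and walks the keys until the requested one matches. A minimal self-contained sketch of that pattern, paraphrased from the trace (locals and error handling simplified; this is not the literal SPDK source):

  shopt -s extglob                        # the +([0-9]) pattern below needs extglob

  # get_meminfo KEY [NODE]: print KEY's value, system-wide or for one NUMA node
  get_meminfo() {
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo mem
      # per-node counters live under sysfs when a node id is given
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines start with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

With the dump above, get_meminfo HugePages_Total prints 512, while get_meminfo HugePages_Surp 0 reads node0's sysfs file instead of /proc/meminfo.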
00:05:37.063 00:25:10 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:37.063 00:25:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:37.063 00:25:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:37.063 00:25:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:37.063 00:25:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:37.063 00:25:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.063 00:25:10 -- setup/common.sh@18 -- # local node=0
00:05:37.063 00:25:10 -- setup/common.sh@19 -- # local var val
00:05:37.063 00:25:10 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.063 00:25:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.063 00:25:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:37.063 00:25:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:37.063 00:25:10 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.063 00:25:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.063 00:25:10 -- setup/common.sh@31 -- # IFS=': '
00:05:37.063 00:25:10 -- setup/common.sh@31 -- # read -r var val _
00:05:37.063 00:25:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6012772 kB' 'MemUsed: 6230200 kB' 'SwapCached: 0 kB' 'Active: 1012888 kB' 'Inactive: 3783196 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 148064 kB' 'Active(file): 1011844 kB' 'Inactive(file): 3635132 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 596 kB' 'Writeback: 0 kB' 'FilePages: 4658636 kB' 'Mapped: 68108 kB' 'AnonPages: 166696 kB' 'Shmem: 2596 kB' 'KernelStack: 4532 kB' 'PageTables: 3764 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197056 kB' 'Slab: 261524 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:37.063 00:25:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.063 00:25:10 -- setup/common.sh@32 -- # continue
[... same key-by-key scan over the node0 file until HugePages_Surp matches ...]
00:05:37.064 00:25:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.064 00:25:10 -- setup/common.sh@33 -- # echo 0
00:05:37.064 00:25:10 -- setup/common.sh@33 -- # return 0
00:05:37.064 00:25:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:37.064 00:25:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:37.064 00:25:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:37.064 00:25:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:37.064 node0=512 expecting 512
00:05:37.064 00:25:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:37.064 00:25:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:37.064 real	0m0.732s
00:05:37.064 user	0m0.310s
00:05:37.064 sys	0m0.454s
00:05:37.064 00:25:10 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:37.064 00:25:10 -- common/autotest_common.sh@10 -- # set +x
00:05:37.064 ************************************
00:05:37.064 END TEST per_node_1G_alloc
00:05:37.064 ************************************
00:05:37.064 00:25:10 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:37.064 00:25:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:37.064 00:25:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:37.064 00:25:10 -- common/autotest_common.sh@10 -- # set +x
00:05:37.064 ************************************
00:05:37.064 START TEST even_2G_alloc
00:05:37.064 ************************************
00:05:37.064 00:25:10 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:05:37.064 00:25:10 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:37.064 00:25:10 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:37.064 00:25:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:37.064 00:25:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:37.064 00:25:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:37.064 00:25:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:37.064 00:25:10 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:37.064 00:25:10 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:37.064 00:25:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:37.064 00:25:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:37.064 00:25:10 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:37.064 00:25:10 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:37.064 00:25:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:37.064 00:25:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
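Condensed, the per-node verification that just printed "node0=512 expecting 512" adds the reserved and surplus counters onto each node's expected share and compares that against what the kernel reports per node. A rough sketch, reusing the get_meminfo sketch above (array names follow the hugepages.sh trace; nodes_sys is what get_nodes collected from sysfs; this is an approximation, not the literal script):

  declare -a nodes_test=([0]=512) nodes_sys=([0]=512)   # one NUMA node in this VM
  resv=$(get_meminfo HugePages_Rsvd)                    # 0 in the run above
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]  # "512 == 512": the check passes
  done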
00:05:37.064 00:25:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:37.064 00:25:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:37.064 00:25:10 -- setup/hugepages.sh@83 -- # : 0
00:05:37.064 00:25:10 -- setup/hugepages.sh@84 -- # : 0
00:05:37.064 00:25:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:37.064 00:25:10 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:37.064 00:25:10 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:37.064 00:25:10 -- setup/hugepages.sh@153 -- # setup output
00:05:37.064 00:25:10 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:37.064 00:25:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:37.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:37.323 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:37.892 00:25:11 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:37.892 00:25:11 -- setup/hugepages.sh@89 -- # local node
00:05:37.892 00:25:11 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:37.892 00:25:11 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:37.892 00:25:11 -- setup/hugepages.sh@92 -- # local surp
00:05:37.892 00:25:11 -- setup/hugepages.sh@93 -- # local resv
00:05:37.892 00:25:11 -- setup/hugepages.sh@94 -- # local anon
00:05:37.892 00:25:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:37.892 00:25:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:37.892 00:25:11 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:37.892 00:25:11 -- setup/common.sh@18 -- # local node=
00:05:37.892 00:25:11 -- setup/common.sh@19 -- # local var val
00:05:37.892 00:25:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.892 00:25:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.892 00:25:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.892 00:25:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.892 00:25:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.892 00:25:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.892 00:25:11 -- setup/common.sh@31 -- # IFS=': '
00:05:37.892 00:25:11 -- setup/common.sh@31 -- # read -r var val _
00:05:37.892 00:25:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4963452 kB' 'MemAvailable: 9485968 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012912 kB' 'Inactive: 3782996 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 147884 kB' 'Active(file): 1011864 kB' 'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 640 kB' 'Writeback: 0 kB' 'AnonPages: 166796 kB' 'Mapped: 68116 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261704 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64648 kB' 'KernelStack: 4416 kB' 'PageTables: 3676 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
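even_2G_alloc's setup, traced above, reduces to: convert the requested size into a page count, hand the whole count to the last (here, only) node, and rerun setup.sh asking for even allocation. Approximately, reusing the get_meminfo sketch from earlier (size unit inferred from the Hugepagesize: 2048 kB dumps; condensed from get_test_nr_hugepages and get_test_nr_hugepages_per_node, not the literal script):

  size_kb=2097152                                  # 2 GiB, as in "get_test_nr_hugepages 2097152"
  hugepagesize_kb=$(get_meminfo Hugepagesize)      # 2048 on this VM
  nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1024
  _no_nodes=1                                      # nodes found by get_nodes
  nodes_test[_no_nodes - 1]=$nr_hugepages          # node0 takes all 1024 pages
  NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh

The two PCI lines above are setup.sh's output: vda stays bound because it is mounted, and 0000:00:10.0 already uses uio_pci_generic.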
00:05:37.892 00:25:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:37.892 00:25:11 -- setup/common.sh@32 -- # continue
[... key-by-key scan continues until AnonHugePages matches ...]
00:05:38.155 00:25:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:38.155 00:25:11 -- setup/common.sh@33 -- # echo 0
00:05:38.155 00:25:11 -- setup/common.sh@33 -- # return 0
00:05:38.155 00:25:11 -- setup/hugepages.sh@97 -- # anon=0
00:05:38.155 00:25:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
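The anon probe just traced (hugepages.sh@96-97) only counts AnonHugePages when transparent hugepages are not pinned to [never]; the "always [madvise] never" in the bracket test above is the live THP mode. As a sketch, again using the get_meminfo helper from earlier:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 (kB) in the dump above
  else
      anon=0
  fi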
00:05:38.155 00:25:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:38.155 00:25:11 -- setup/common.sh@18 -- # local node=
00:05:38.155 00:25:11 -- setup/common.sh@19 -- # local var val
00:05:38.155 00:25:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:38.155 00:25:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:38.155 00:25:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:38.155 00:25:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:38.155 00:25:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:38.155 00:25:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:38.155 00:25:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4963228 kB' 'MemAvailable: 9485744 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012908 kB' 'Inactive: 3782968 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 147856 kB' 'Active(file): 1011864 kB' 'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 166504 kB' 'Mapped: 68116 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261728 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64672 kB' 'KernelStack: 4384 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:38.155 00:25:11 -- setup/common.sh@31 -- # IFS=': '
00:05:38.155 00:25:11 -- setup/common.sh@31 -- # read -r var val _
00:05:38.155 00:25:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:38.155 00:25:11 -- setup/common.sh@32 -- # continue
[... key-by-key scan continues until HugePages_Surp matches ...]
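The reads that follow fetch HugePages_Surp and HugePages_Rsvd through the same scan and then test the global ledger that HugePages_Total must balance. Condensed, with the get_meminfo sketch from above and this run's values in comments (an approximation of hugepages.sh@99-110, not the literal script):

  surp=$(get_meminfo HugePages_Surp)   # 0
  resv=$(get_meminfo HugePages_Rsvd)   # 0
  printf '%s\n' "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
      "surplus_hugepages=$surp" "anon_hugepages=$anon"
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0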
00:05:38.156 00:25:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:38.156 00:25:11 -- setup/common.sh@33 -- # echo 0
00:05:38.156 00:25:11 -- setup/common.sh@33 -- # return 0
00:05:38.156 00:25:11 -- setup/hugepages.sh@99 -- # surp=0
00:05:38.156 00:25:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:38.156 00:25:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:38.156 00:25:11 -- setup/common.sh@18 -- # local node=
00:05:38.156 00:25:11 -- setup/common.sh@19 -- # local var val
00:05:38.156 00:25:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:38.156 00:25:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:38.156 00:25:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:38.156 00:25:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:38.156 00:25:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:38.156 00:25:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:38.156 00:25:11 -- setup/common.sh@31 -- # IFS=': '
00:05:38.156 00:25:11 -- setup/common.sh@31 -- # read -r var val _
00:05:38.156 00:25:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4963228 kB' 'MemAvailable: 9485744 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012908 kB' 'Inactive: 3782828 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 147716 kB' 'Active(file): 1011864 kB' 'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 166652 kB' 'Mapped: 68116 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261728 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64672 kB' 'KernelStack: 4384 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:38.156 00:25:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:38.156 00:25:11 -- setup/common.sh@32 -- # continue
[... key-by-key scan continues until HugePages_Rsvd matches ...]
00:05:38.157 00:25:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:38.157 00:25:11 -- setup/common.sh@33 -- # echo 0
00:05:38.157 00:25:11 -- setup/common.sh@33 -- # return 0
00:05:38.157 nr_hugepages=1024
00:05:38.157 resv_hugepages=0
00:05:38.157 surplus_hugepages=0
00:05:38.157 anon_hugepages=0
00:05:38.157 00:25:11 -- setup/hugepages.sh@100 -- # resv=0
00:05:38.158 00:25:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:38.158 00:25:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:38.158 00:25:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:38.158 00:25:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:38.158 00:25:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:38.158 00:25:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:38.158 00:25:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:38.158 00:25:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:38.158 00:25:11 -- setup/common.sh@18 -- # local node=
00:05:38.158 00:25:11 -- setup/common.sh@19 -- # local var val
00:05:38.158 00:25:11 -- setup/common.sh@20 -- # local mem_f mem
00:05:38.158 00:25:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:38.158 00:25:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:38.158 00:25:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:38.158 00:25:11 -- setup/common.sh@28 -- # mapfile -t mem
00:05:38.158 00:25:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': '
00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _
00:05:38.158 00:25:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4963984 kB' 'MemAvailable: 9486500 kB' 'Buffers: 35540 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012908 kB' 'Inactive: 3783152 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 148040 kB' 'Active(file): 1011864 kB' 'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 166724 kB' 'Mapped: 68116 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261616 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64560 kB' 'KernelStack: 4384 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 
00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.158 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.158 00:25:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.159 00:25:11 -- setup/common.sh@33 -- # echo 1024 00:05:38.159 00:25:11 -- setup/common.sh@33 -- # return 0 00:05:38.159 00:25:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:38.159 00:25:11 -- setup/hugepages.sh@112 -- # get_nodes 00:05:38.159 00:25:11 -- setup/hugepages.sh@27 -- # local node 00:05:38.159 00:25:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:38.159 00:25:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:38.159 00:25:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:38.159 00:25:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:38.159 00:25:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:38.159 00:25:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:38.159 00:25:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:38.159 00:25:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:38.159 00:25:11 -- setup/common.sh@18 -- # local node=0 00:05:38.159 00:25:11 -- setup/common.sh@19 -- # local var val 00:05:38.159 00:25:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.159 00:25:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.159 00:25:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:38.159 00:25:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:38.159 00:25:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.159 00:25:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4963984 kB' 'MemUsed: 7278988 kB' 'SwapCached: 0 kB' 'Active: 1012908 kB' 'Inactive: 3783212 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 148100 kB' 'Active(file): 1011864 kB' 
'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 4658636 kB' 'Mapped: 68116 kB' 'AnonPages: 166800 kB' 'Shmem: 2596 kB' 'KernelStack: 4416 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197056 kB' 'Slab: 261616 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 
00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.159 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.159 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # continue 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.160 00:25:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.160 00:25:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.160 00:25:11 -- setup/common.sh@33 -- # echo 0 00:05:38.160 00:25:11 -- setup/common.sh@33 -- # return 0 00:05:38.160 00:25:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:38.160 00:25:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:38.160 00:25:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
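[Editor's note: the condensed scans above are dozens of iterations of the same four xtrace lines. As a reading aid, here is a minimal standalone sketch of the get_meminfo pattern the trace keeps repeating; it is illustrative, not the exact SPDK setup/common.sh source, and get_meminfo_sketch is a hypothetical name.]

#!/usr/bin/env bash
# Sketch of the traced pattern: pick one "Key: value" pair out of
# /proc/meminfo, or out of a per-NUMA-node meminfo when a node is given.
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node files live under /sys and prefix every line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob                   # needed for the +([0-9]) pattern below
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node <n> " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # split "Key: value kB"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Total      # prints 1024 on the state traced above
get_meminfo_sketch HugePages_Surp 0     # same key, but from NUMA node 0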
00:05:38.160 00:25:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:38.160 00:25:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:38.160 node0=1024 expecting 1024 00:05:38.160 00:25:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:38.160 00:05:38.160 real 0m1.135s 00:05:38.160 user 0m0.280s 00:05:38.160 sys 0m0.787s 00:05:38.160 00:25:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.160 00:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.160 ************************************ 00:05:38.160 END TEST even_2G_alloc 00:05:38.160 ************************************ 00:05:38.160 00:25:11 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:38.160 00:25:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.419 00:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.419 00:25:11 -- common/autotest_common.sh@10 -- # set +x 00:05:38.419 ************************************ 00:05:38.419 START TEST odd_alloc 00:05:38.419 ************************************ 00:05:38.419 00:25:11 -- common/autotest_common.sh@1111 -- # odd_alloc 00:05:38.419 00:25:11 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:38.419 00:25:11 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:38.419 00:25:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:38.419 00:25:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:38.419 00:25:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:38.419 00:25:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:38.419 00:25:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:38.419 00:25:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:38.419 00:25:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:38.419 00:25:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:38.419 00:25:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:38.419 00:25:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:38.419 00:25:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:38.419 00:25:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:38.419 00:25:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.419 00:25:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:38.419 00:25:11 -- setup/hugepages.sh@83 -- # : 0 00:05:38.419 00:25:11 -- setup/hugepages.sh@84 -- # : 0 00:05:38.419 00:25:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.419 00:25:11 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:38.419 00:25:11 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:38.419 00:25:11 -- setup/hugepages.sh@160 -- # setup output 00:05:38.419 00:25:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.419 00:25:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:38.679 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:39.250 00:25:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:39.250 00:25:12 -- setup/hugepages.sh@89 -- # local node 00:05:39.250 00:25:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:39.250 00:25:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:39.250 00:25:12 -- setup/hugepages.sh@92 -- # local surp 00:05:39.250 00:25:12 -- setup/hugepages.sh@93 -- # local resv 00:05:39.250 00:25:12 -- setup/hugepages.sh@94 -- # local anon 00:05:39.250 00:25:12 -- 
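[Editor's note: the odd_alloc prologue above turns HUGEMEM=2049 into nr_hugepages=1025. A back-of-envelope check of that sizing, assuming round-up division by the 2048 kB default hugepage size; the exact expression inside get_test_nr_hugepages is not visible in this excerpt.]

# Illustrative arithmetic only; the rounding rule is an assumption.
size_kb=$((2049 * 1024))        # HUGEMEM=2049 MiB -> 2098176 kB, as traced
page_kb=2048                    # Hugepagesize: 2048 kB per the dumps above
echo $(( (size_kb + page_kb - 1) / page_kb ))   # -> 1025, an odd page count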
00:05:39.250 00:25:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:39.250 00:25:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:39.250 00:25:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:39.250 00:25:12 -- setup/common.sh@18 -- # local node=
00:05:39.250 00:25:12 -- setup/common.sh@19 -- # local var val
00:05:39.250 00:25:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.250 00:25:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.250 00:25:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.250 00:25:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.250 00:25:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.250 00:25:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.251 00:25:12 -- setup/common.sh@31 -- # IFS=': '
00:05:39.251 00:25:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4959664 kB' 'MemAvailable: 9482188 kB' 'Buffers: 35548 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012924 kB' 'Inactive: 3783392 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 148280 kB' 'Active(file): 1011872 kB' 'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 596 kB' 'Writeback: 0 kB' 'AnonPages: 166880 kB' 'Mapped: 68124 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261488 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64432 kB' 'KernelStack: 4416 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 524584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:39.251 00:25:12 -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: fields MemTotal through HardwareCorrupted each fail [[ $var == AnonHugePages ]] and hit continue ...]
00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:39.252 00:25:12 -- setup/common.sh@33 -- # echo 0
00:05:39.252 00:25:12 -- setup/common.sh@33 -- # return 0
00:05:39.252 00:25:12 -- setup/hugepages.sh@97 -- # anon=0
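[Editor's note: the hugepages.sh@96 test above matches the transparent-hugepage mode string against *\[\n\e\v\e\r\]*: anonymous hugepages are only counted when THP is not pinned to "never". A small sketch of that gate; the sysfs path is the standard THP location, and get_meminfo_sketch is the illustrative helper defined earlier.]

# Sketch of the THP gate traced at setup/hugepages.sh@96 (illustrative).
# The sysfs file brackets the active mode, e.g. "always [madvise] never".
thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_mode != *'[never]'* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # counted only when THP is usable
else
    anon=0
fi
echo "anon=$anon"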
'CommitLimit: 5071884 kB' 'Committed_AS: 510220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.252 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.252 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.253 00:25:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.253 00:25:12 -- setup/common.sh@32 -- # continue 00:05:39.253 00:25:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.253 00:25:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.253 00:25:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:39.253 00:25:12 -- setup/common.sh@32 -- # continue
00:05:39.253 00:25:12 -- setup/common.sh@31 -- # IFS=': '
00:05:39.253 00:25:12 -- setup/common.sh@31 -- # read -r var val _
[... identical IFS/read/compare/continue xtrace repeats for each remaining /proc/meminfo field, SUnreclaim through HugePages_Rsvd, none matching HugePages_Surp ...]
00:05:39.253 00:25:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.253 00:25:12 -- setup/common.sh@33 -- # echo 0
00:05:39.253 00:25:12 -- setup/common.sh@33 -- # return 0
00:05:39.253 00:25:12 -- setup/hugepages.sh@99 -- # surp=0
00:05:39.253 00:25:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.253 00:25:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:39.253 00:25:12 -- setup/common.sh@18 -- # local node=
00:05:39.253 00:25:12 -- setup/common.sh@19 -- # local var val
00:05:39.253 00:25:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.253 00:25:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.253 00:25:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.253 00:25:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.253 00:25:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.253 00:25:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.253 00:25:12 -- setup/common.sh@31 -- # IFS=': '
00:05:39.253 00:25:12 -- setup/common.sh@31 -- # read -r var val _
00:05:39.253 00:25:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4961228 kB' 'MemAvailable: 9483752 kB' 'Buffers: 35548 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012916 kB' 'Inactive: 3777628 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142516 kB' 'Active(file): 1011872 kB' 'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 600 kB' 'Writeback: 0 kB' 'AnonPages: 161192 kB' 'Mapped: 67304 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261432 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64376 kB' 'KernelStack: 4240 kB' 'PageTables: 3176 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 510220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:39.253 00:25:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.253 00:25:12 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace repeats for each field from MemFree through HugePages_Free, none matching HugePages_Rsvd ...]
00:05:39.254 00:25:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.254 00:25:12 -- setup/common.sh@33 -- # echo 0
00:05:39.254 00:25:12 -- setup/common.sh@33 -- # return 0
00:05:39.254 00:25:12 -- setup/hugepages.sh@100 -- # resv=0
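An aside for readers following the trace: the IFS=': '/read/compare/continue churn above is one linear scan by common.sh's get_meminfo helper over a mapfile'd copy of /proc/meminfo. A minimal standalone sketch of that lookup pattern follows; the function name is illustrative, the default behavior when a key is absent is an assumption, but the array strip, the field split, and the node fallback mirror the @-numbered commands in the trace.

#!/usr/bin/env bash
# Sketch of the lookup the xtrace walks through: load meminfo into an array,
# strip the "Node <n> " prefix that per-node files carry, then scan each
# "Key: value" line until the requested key matches and print its value.
shopt -s extglob  # needed for the +([0-9]) pattern in the prefix strip
get_meminfo_sketch() {
    local get=$1 node=${2-} var val _
    local mem_f=/proc/meminfo
    # A node argument switches the source to that node's own meminfo file,
    # as the [[ -e /sys/devices/system/node/node0/meminfo ]] branch does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
}
# Usage matching the calls in this log:
#   get_meminfo_sketch HugePages_Rsvd     -> 0
#   get_meminfo_sketch HugePages_Surp 0   -> per-node value, 0 here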
00:05:39.254 00:25:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:39.254 nr_hugepages=1025
00:05:39.254 00:25:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:39.254 resv_hugepages=0
00:05:39.254 00:25:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:39.254 surplus_hugepages=0
00:05:39.254 00:25:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:39.254 anon_hugepages=0
00:05:39.254 00:25:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:39.254 00:25:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:39.254 00:25:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:39.254 00:25:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:39.254 00:25:12 -- setup/common.sh@18 -- # local node=
00:05:39.254 00:25:12 -- setup/common.sh@19 -- # local var val
00:05:39.254 00:25:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.254 00:25:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.254 00:25:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.254 00:25:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.254 00:25:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.254 00:25:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.254 00:25:12 -- setup/common.sh@31 -- # IFS=': '
00:05:39.255 00:25:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4961428 kB' 'MemAvailable: 9483952 kB' 'Buffers: 35548 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012916 kB' 'Inactive: 3777476 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142364 kB' 'Active(file): 1011872 kB' 'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 600 kB' 'Writeback: 0 kB' 'AnonPages: 161040 kB' 'Mapped: 67304 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261432 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64376 kB' 'KernelStack: 4276 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071884 kB' 'Committed_AS: 510220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:39.255 00:25:12 -- setup/common.sh@31 -- # read -r var val _
00:05:39.255 00:25:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:39.255 00:25:12 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace repeats for each field from MemFree through FilePmdMapped, none matching HugePages_Total ...]
00:05:39.256 00:25:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:39.256 00:25:12 -- setup/common.sh@33 -- # echo 1025
00:05:39.256 00:25:12 -- setup/common.sh@33 -- # return 0
00:05:39.256 00:25:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:39.256 00:25:12 -- setup/hugepages.sh@112 -- # get_nodes
00:05:39.256 00:25:12 -- setup/hugepages.sh@27 -- # local node
00:05:39.256 00:25:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:39.533 00:25:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:39.533 00:25:12 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:39.533 00:25:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:39.533 00:25:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:39.533 00:25:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:39.533 00:25:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:39.533 00:25:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.533 00:25:12 -- setup/common.sh@18 -- # local node=0
00:05:39.533 00:25:12 -- setup/common.sh@19 -- # local var val
00:05:39.533 00:25:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.533 00:25:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.533 00:25:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:39.533 00:25:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:39.533 00:25:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.533 00:25:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.533 00:25:12 -- setup/common.sh@31 -- # IFS=': '
00:05:39.533 00:25:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4961428 kB' 'MemUsed: 7281544 kB' 'SwapCached: 0 kB' 'Active: 1012916 kB' 'Inactive: 3777476 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142364 kB' 'Active(file): 1011872 kB' 'Inactive(file): 3635112 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 600 kB' 'Writeback: 0 kB' 'FilePages: 4658644 kB' 'Mapped: 67304 kB' 'AnonPages: 161036 kB' 'Shmem: 2596 kB' 'KernelStack: 4296 kB' 'PageTables: 3228 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197056 kB' 'Slab: 261432 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:05:39.533 00:25:12 -- setup/common.sh@31 -- # read -r var val _
00:05:39.533 00:25:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.533 00:25:12 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace repeats for each node0 field from MemFree through HugePages_Free, none matching HugePages_Surp ...]
00:05:39.534 00:25:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.534 00:25:12 -- setup/common.sh@33 -- # echo 0
00:05:39.534 00:25:12 -- setup/common.sh@33 -- # return 0
00:05:39.534 00:25:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:39.534 00:25:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:39.534 00:25:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:39.534 00:25:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:39.534 00:25:12 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:39.534 node0=1025 expecting 1025
00:05:39.534 00:25:12 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:39.534 
00:05:39.534 real 0m1.087s
00:05:39.534 user 0m0.296s
00:05:39.534 sys 0m0.742s
00:05:39.534 00:25:12 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:39.534 00:25:12 -- common/autotest_common.sh@10 -- # set +x
00:05:39.534 ************************************
00:05:39.534 END TEST odd_alloc
00:05:39.534 ************************************
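The odd_alloc pass just completed allocates an odd page count (1025), then cross-checks the global HugePages_Total against what each NUMA node reports, as the node0=1025 line shows. A hedged sketch of that per-node accounting, written against the standard sysfs layout rather than the SPDK helpers themselves:

#!/usr/bin/env bash
# Sum HugePages_Total from every node's meminfo and compare against the
# global figure in /proc/meminfo, the invariant odd_alloc relies on.
total=0
for node_meminfo in /sys/devices/system/node/node[0-9]*/meminfo; do
    # Node meminfo lines look like "Node 0 HugePages_Total:  1025";
    # the count is the last field.
    pages=$(awk '/HugePages_Total:/ {print $NF}' "$node_meminfo")
    total=$((total + pages))
done
global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == global )) && echo "node sum $total matches /proc/meminfo"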
00:05:39.534 00:25:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:05:39.534 00:25:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:39.534 00:25:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:39.534 00:25:12 -- common/autotest_common.sh@10 -- # set +x
00:05:39.534 ************************************
00:05:39.534 START TEST custom_alloc
00:05:39.534 ************************************
00:05:39.535 00:25:12 -- common/autotest_common.sh@1111 -- # custom_alloc
00:05:39.535 00:25:12 -- setup/hugepages.sh@167 -- # local IFS=,
00:05:39.535 00:25:12 -- setup/hugepages.sh@169 -- # local node
00:05:39.535 00:25:12 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:05:39.535 00:25:12 -- setup/hugepages.sh@170 -- # local nodes_hp
00:05:39.535 00:25:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:05:39.535 00:25:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:05:39.535 00:25:12 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:39.535 00:25:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:39.535 00:25:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:39.535 00:25:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:39.535 00:25:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.535 00:25:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:39.535 00:25:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.535 00:25:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.535 00:25:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.535 00:25:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:05:39.535 00:25:12 -- setup/hugepages.sh@83 -- # : 0
00:05:39.535 00:25:12 -- setup/hugepages.sh@84 -- # : 0
00:05:39.535 00:25:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:05:39.535 00:25:12 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:05:39.535 00:25:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:05:39.535 00:25:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:05:39.535 00:25:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:39.535 00:25:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.535 00:25:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:39.535 00:25:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.535 00:25:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.535 00:25:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.535 00:25:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:05:39.535 00:25:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:05:39.535 00:25:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:05:39.535 00:25:12 -- setup/hugepages.sh@78 -- # return 0
00:05:39.535 00:25:12 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:05:39.535 00:25:12 -- setup/hugepages.sh@187 -- # setup output
00:05:39.535 00:25:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:39.535 00:25:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:39.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:40.061 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
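custom_alloc drives setup.sh with HUGENODE='nodes_hp[0]=512', a per-node page budget rather than a single global count. As a hedged sketch (this targets the kernel's standard node-local sysfs knob that such a request ultimately reduces to, not necessarily setup.sh's exact code path; 2048 kB pages on node 0 are assumed):

#!/usr/bin/env bash
# Request 512 hugepages on node 0 via the node-local sysfs interface.
echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# Confirm the node picked up the allocation:
grep HugePages_Total /sys/devices/system/node/node0/meminfo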
00:05:40.061 00:25:13 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:05:40.061 00:25:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:40.061 00:25:13 -- setup/hugepages.sh@89 -- # local node
00:05:40.061 00:25:13 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:40.061 00:25:13 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:40.061 00:25:13 -- setup/hugepages.sh@92 -- # local surp
00:05:40.061 00:25:13 -- setup/hugepages.sh@93 -- # local resv
00:05:40.061 00:25:13 -- setup/hugepages.sh@94 -- # local anon
00:05:40.061 00:25:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:40.061 00:25:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:40.061 00:25:13 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:40.061 00:25:13 -- setup/common.sh@18 -- # local node=
00:05:40.061 00:25:13 -- setup/common.sh@19 -- # local var val
00:05:40.061 00:25:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.061 00:25:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.061 00:25:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.061 00:25:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.061 00:25:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.062 00:25:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.062 00:25:13 -- setup/common.sh@31 -- # IFS=': '
00:05:40.062 00:25:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6013308 kB' 'MemAvailable: 10535828 kB' 'Buffers: 35548 kB' 'Cached: 4623092 kB' 'SwapCached: 0 kB' 'Active: 1012940 kB' 'Inactive: 3777432 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142336 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635096 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 112 kB' 'Writeback: 0 kB' 'AnonPages: 161216 kB' 'Mapped: 67528 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261444 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64388 kB' 'KernelStack: 4324 kB' 'PageTables: 3220 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:40.062 00:25:13 -- setup/common.sh@31 -- # read -r var val _
00:05:40.062 00:25:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.062 00:25:13 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace repeats for each field from MemFree through HardwareCorrupted, none matching AnonHugePages ...]
00:05:40.062 00:25:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.062 00:25:13 -- setup/common.sh@33 -- # echo 0
00:05:40.062 00:25:13 -- setup/common.sh@33 -- # return 0
00:05:40.062 00:25:13 -- setup/hugepages.sh@97 -- # anon=0
00:05:40.062 00:25:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:40.062 00:25:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.062 00:25:13 -- setup/common.sh@18 -- # local node=
00:05:40.062 00:25:13 -- setup/common.sh@19 -- # local var val
00:05:40.062 00:25:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.062 00:25:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.062 00:25:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.062 00:25:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.062 00:25:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.062 00:25:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.062 00:25:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6014348 kB' 'MemAvailable: 10536872 kB' 'Buffers: 35548 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012940 kB' 'Inactive: 3777836 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 142736 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'AnonPages: 161176 kB' 'Mapped: 67528 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261444 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64388 kB' 'KernelStack: 4288 kB' 'PageTables: 3292 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:40.062 00:25:13 -- setup/common.sh@31 -- # IFS=': '
00:05:40.062 00:25:13 -- setup/common.sh@31 -- # read -r var val _
00:05:40.062 00:25:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.062 00:25:13 -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue xtrace repeats for each field from MemFree through Active(anon), none matching HugePages_Surp ...]
00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue
00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 
-- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.324 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.324 00:25:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.325 00:25:13 -- setup/common.sh@33 -- # echo 0 00:05:40.325 00:25:13 -- setup/common.sh@33 -- # return 0 00:05:40.325 00:25:13 -- setup/hugepages.sh@99 -- # surp=0 00:05:40.325 00:25:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.325 00:25:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.325 00:25:13 -- setup/common.sh@18 -- # local node= 00:05:40.325 00:25:13 -- setup/common.sh@19 -- # local var val 00:05:40.325 00:25:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.325 00:25:13 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:05:40.325 00:25:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.325 00:25:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.325 00:25:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.325 00:25:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6013844 kB' 'MemAvailable: 10536368 kB' 'Buffers: 35548 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012928 kB' 'Inactive: 3777632 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142532 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'AnonPages: 161152 kB' 'Mapped: 67308 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261468 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64412 kB' 'KernelStack: 4240 kB' 'PageTables: 3164 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB' 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.325 00:25:13 -- setup/common.sh@32 -- # continue 00:05:40.325 00:25:13 -- setup/common.sh@31 -- # IFS=': ' 
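For anyone reading the trace rather than rerunning it: the records above are a single field lookup over /proc/meminfo. Below is a minimal re-creation of that lookup, simplified to stream the file instead of buffering it with mapfile the way setup/common.sh does; the function name matches the harness, everything else is a sketch.

    #!/usr/bin/env bash
    # Simplified re-creation of the get_meminfo lookup traced above; the real
    # setup/common.sh buffers the whole file with mapfile and strips the
    # "Node N " prefixes in one pass, but the lookup logic is the same.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local line var val _
        # A node argument switches the source to that node's own meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }           # per-node files prefix each line with "Node N "
            IFS=': ' read -r var val _ <<<"$line" # "HugePages_Total:   512" -> key / value
            if [[ $var == "$get" ]]; then
                echo "$val"   # value only; a trailing unit column such as kB lands in $_
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    get_meminfo HugePages_Total      # e.g. 512, matching the snapshot above
    get_meminfo HugePages_Surp 0     # same key, read from node0's own meminfo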
[xtrace elided: per-key scan of /proc/meminfo for HugePages_Rsvd; every non-matching field skipped via continue]
00:05:40.326 00:25:13 -- setup/common.sh@33 -- # echo 0
00:05:40.326 00:25:13 -- setup/common.sh@33 -- # return 0
00:05:40.326 00:25:13 -- setup/hugepages.sh@100 -- # resv=0
00:05:40.326 00:25:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:40.326 nr_hugepages=512
00:05:40.326 00:25:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:40.326 resv_hugepages=0
00:05:40.326 00:25:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:40.326 surplus_hugepages=0
00:05:40.326 00:25:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:40.326 anon_hugepages=0
00:05:40.326 00:25:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:40.326 00:25:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:40.326 00:25:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:40.326 00:25:13 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:40.326 00:25:13 -- setup/common.sh@18 -- # local node=
00:05:40.326 00:25:13 -- setup/common.sh@19 -- # local var val
00:05:40.326 00:25:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.326 00:25:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.326 00:25:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.326 00:25:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.326 00:25:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.326 00:25:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.326 00:25:13 -- setup/common.sh@31 -- # IFS=': '
00:05:40.326 00:25:13 -- setup/common.sh@31 -- # read -r var val _
00:05:40.326 00:25:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6013844 kB' 'MemAvailable: 10536368 kB' 'Buffers: 35548 kB' 'Cached: 4623096 kB' 'SwapCached: 0 kB' 'Active: 1012928 kB' 'Inactive: 3777392 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142292 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'AnonPages: 160912 kB' 'Mapped: 67308 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261468 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64412 kB' 'KernelStack: 4292 kB' 'PageTables: 3124 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597196 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
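The hugepages.sh checks just traced reduce to one identity: the pool size the kernel reports must equal the configured page count plus surplus and reserved pages. A sketch of that check, reusing the get_meminfo sketch above; verify_hugepage_accounting is an illustrative name, not a harness function.

    # Illustrative restatement of the hugepages.sh@107-110 checks above;
    # verify_hugepage_accounting is a made-up name, not part of the harness.
    verify_hugepage_accounting() {
        local requested=$1                     # what the test configured (512 here)
        local total surp resv
        total=$(get_meminfo HugePages_Total)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        # Kernel-reported pool == configured pages + surplus + reserved
        (( total == requested + surp + resv )) || return 1
        echo "nr_hugepages=$requested resv_hugepages=$resv surplus_hugepages=$surp"
    }

    verify_hugepage_accounting 512   # passes on the snapshot above: 512 == 512 + 0 + 0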
[xtrace elided: per-key scan of /proc/meminfo for HugePages_Total; every non-matching field skipped via continue]
00:05:40.327 00:25:13 -- setup/common.sh@33 -- # echo 512
00:05:40.327 00:25:13 -- setup/common.sh@33 -- # return 0
00:05:40.327 00:25:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:40.327 00:25:13 -- setup/hugepages.sh@112 -- # get_nodes
00:05:40.327 00:25:13 -- setup/hugepages.sh@27 -- # local node
00:05:40.327 00:25:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:40.327 00:25:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:40.327 00:25:13 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:40.327 00:25:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:40.327 00:25:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:40.327 00:25:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:40.327 00:25:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:40.327 00:25:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.327 00:25:13 -- setup/common.sh@18 -- # local node=0
00:05:40.327 00:25:13 -- setup/common.sh@19 -- # local var val
00:05:40.327 00:25:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.327 00:25:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.327 00:25:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:40.327 00:25:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:40.327 00:25:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.327 00:25:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.327 00:25:13 -- setup/common.sh@31 -- # IFS=': '
00:05:40.327 00:25:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 6013844 kB' 'MemUsed: 6229128 kB' 'SwapCached: 0 kB' 'Active: 1012928 kB' 'Inactive: 3777676 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142576 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 180 kB' 'Writeback: 0 kB' 'FilePages: 4658644 kB' 'Mapped: 67308 kB' 'AnonPages: 161248 kB' 'Shmem: 2596 kB' 'KernelStack: 4272 kB' 'PageTables: 3248 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197056 kB' 'Slab: 261508 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
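The node0 read above is the same lookup with its source switched to /sys/devices/system/node/node0/meminfo, folded into a per-node expectation table. A condensed sketch of that bookkeeping, again reusing the earlier get_meminfo sketch; the array names follow the trace, the loop body is simplified.

    # Condensed sketch of the per-node bookkeeping traced above; assumes the
    # get_meminfo sketch from earlier and the single-node 512-page pool shown.
    nodes_test=([0]=512)   # expected split of the pool across nodes
    nodes_sys=([0]=512)    # what sysfs reported for each node

    verify_nodes() {
        local node surp resv actual
        resv=$(get_meminfo HugePages_Rsvd)             # reserved pages are global
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))
            surp=$(get_meminfo HugePages_Surp "$node") # surplus is read per node
            (( nodes_test[node] += surp ))
            actual=${nodes_sys[$node]}
            echo "node$node=$actual expecting ${nodes_test[node]}"
            [[ $actual == "${nodes_test[node]}" ]] || return 1
        done
    }

    verify_nodes   # prints "node0=512 expecting 512", as in the log just below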
[xtrace elided: per-key scan of /sys/devices/system/node/node0/meminfo for HugePages_Surp; every non-matching field skipped via continue]
00:05:40.328 00:25:13 -- setup/common.sh@33 -- # echo 0
00:05:40.328 00:25:13 -- setup/common.sh@33 -- # return 0
00:05:40.328 00:25:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:40.328 00:25:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:40.328 00:25:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:40.328 00:25:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:40.328 00:25:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:40.328 node0=512 expecting 512
00:05:40.328 00:25:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:40.328
00:05:40.328 real	0m0.872s
00:05:40.328 user	0m0.294s
00:05:40.328 sys	0m0.521s
00:05:40.328 00:25:13 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:40.328 00:25:13 -- common/autotest_common.sh@10 -- # set +x
00:05:40.328 ************************************
00:05:40.328 END TEST custom_alloc
00:05:40.328 ************************************
00:05:40.329 00:25:13 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:40.329 00:25:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:40.329 00:25:13 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:40.329 00:25:13 -- common/autotest_common.sh@10 -- # set +x
00:05:40.587 ************************************
00:05:40.587 START TEST no_shrink_alloc
00:05:40.587 00:25:13 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:05:40.587 00:25:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:40.587 00:25:13 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:40.587 00:25:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:40.587 00:25:13 -- setup/hugepages.sh@51 -- # shift
00:05:40.587 00:25:13 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:40.587 00:25:13 -- setup/hugepages.sh@52 -- # local node_ids
00:05:40.587 00:25:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:40.587 00:25:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:40.587 00:25:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:40.587 00:25:13 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:40.587 00:25:13 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:40.587 00:25:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:40.587 00:25:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:40.587 00:25:13 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:40.587 00:25:13 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:40.587 00:25:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:40.587 00:25:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:40.587 00:25:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:40.587 00:25:13 -- setup/hugepages.sh@73 -- # return 0
00:05:40.587 00:25:13 -- setup/hugepages.sh@198 -- # setup output
00:05:40.587 00:25:13 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:40.587 00:25:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:40.846 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:40.846 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
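get_test_nr_hugepages just converted the requested 2 GiB pool into a page count: with the default 2048 kB hugepage size, 2097152 kB / 2048 kB = 1024 pages, which is the nr_hugepages=1024 assigned above. The same arithmetic as a sketch (in a real run default_hugepages comes from the Hugepagesize field of /proc/meminfo):

    size_kb=2097152              # requested pool size, in kB
    default_hugepages=2048       # Hugepagesize, in kB
    (( nr_hugepages = size_kb / default_hugepages ))
    echo "$nr_hugepages"         # 1024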
00:05:41.417 00:25:14 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:41.417 00:25:14 -- setup/hugepages.sh@89 -- # local node
00:05:41.417 00:25:14 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.417 00:25:14 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.417 00:25:14 -- setup/hugepages.sh@92 -- # local surp
00:05:41.417 00:25:14 -- setup/hugepages.sh@93 -- # local resv
00:05:41.417 00:25:14 -- setup/hugepages.sh@94 -- # local anon
00:05:41.417 00:25:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.417 00:25:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.417 00:25:14 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.417 00:25:14 -- setup/common.sh@18 -- # local node=
00:05:41.417 00:25:14 -- setup/common.sh@19 -- # local var val
00:05:41.417 00:25:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.417 00:25:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.417 00:25:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.417 00:25:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.417 00:25:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.417 00:25:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.417 00:25:14 -- setup/common.sh@31 -- # IFS=': '
00:05:41.417 00:25:14 -- setup/common.sh@31 -- # read -r var val _
00:05:41.417 00:25:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4971004 kB' 'MemAvailable: 9493532 kB' 'Buffers: 35548 kB' 'Cached: 4623100 kB' 'SwapCached: 0 kB' 'Active: 1012936 kB' 'Inactive: 3777996 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142892 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 161272 kB' 'Mapped: 67316 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261272 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64216 kB' 'KernelStack: 4288 kB' 'PageTables: 3288 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
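Everything from here to the next return 0 is setup/common.sh's get_meminfo scanning that snapshot one key: value pair at a time until it reaches the requested field. A minimal standalone sketch of the same parsing pattern (a sketch of the traced logic, not the SPDK helper verbatim):

    # Look up one field of a meminfo-style file by scanning 'key: value' pairs.
    get_meminfo() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # numeric value only; a trailing 'kB' lands in $_
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo AnonHugePages      # prints 0 for the snapshot above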
00:05:41.417 00:25:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:41.417 00:25:14 -- setup/common.sh@32 -- # continue
00:05:41.418 00:25:14 -- setup/common.sh@31 -- # IFS=': '
00:05:41.418 00:25:14 -- setup/common.sh@31 -- # read -r var val _
00:05:41.418 00:25:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:41.418 00:25:14 -- setup/common.sh@33 -- # echo 0
00:05:41.418 00:25:14 -- setup/common.sh@33 -- # return 0
00:05:41.418 00:25:14 -- setup/hugepages.sh@97 -- # anon=0
00:05:41.418 00:25:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:41.418 00:25:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.418 00:25:14 -- setup/common.sh@18 -- # local node=
00:05:41.418 00:25:14 -- setup/common.sh@19 -- # local var val
00:05:41.418 00:25:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.418 00:25:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.418 00:25:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.418 00:25:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.418 00:25:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.418 00:25:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.418 00:25:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4971004 kB' 'MemAvailable: 9493532 kB' 'Buffers: 35548 kB' 'Cached: 4623100 kB' 'SwapCached: 0 kB' 'Active: 1012936 kB' 'Inactive: 3777968 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142864 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 161244 kB' 'Mapped: 67316 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261272 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64216 kB' 'KernelStack: 4272 kB' 'PageTables: 3248 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
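verify_nr_hugepages captures each of these lookups with command substitution; the assignments traced as anon=0, surp=0, and resv=0 correspond to (sketch, following the traced variable names):

    anon=$(get_meminfo AnonHugePages)    # transparent hugepages in use, kB
    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # pages reserved but not yet faulted in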
00:05:41.418 00:25:14 -- setup/common.sh@31 -- # IFS=': '
00:05:41.418 00:25:14 -- setup/common.sh@31 -- # read -r var val _
00:05:41.418 00:25:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.418 00:25:14 -- setup/common.sh@32 -- # continue
00:05:41.419 00:25:14 -- setup/common.sh@31 -- # IFS=': '
00:05:41.419 00:25:14 -- setup/common.sh@31 -- # read -r var val _
00:05:41.419 00:25:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.419 00:25:14 -- setup/common.sh@33 -- # echo 0
00:05:41.419 00:25:14 -- setup/common.sh@33 -- # return 0
00:05:41.419 00:25:14 -- setup/hugepages.sh@99 -- # surp=0
00:05:41.419 00:25:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:41.419 00:25:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:41.419 00:25:14 -- setup/common.sh@18 -- # local node=
00:05:41.419 00:25:14 -- setup/common.sh@19 -- # local var val
00:05:41.419 00:25:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.419 00:25:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.419 00:25:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.419 00:25:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.419 00:25:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.419 00:25:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.419 00:25:14 -- setup/common.sh@31 -- # IFS=': '
00:05:41.419 00:25:14 -- setup/common.sh@31 -- # read -r var val _
00:05:41.419 00:25:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4970780 kB' 'MemAvailable: 9493308 kB' 'Buffers: 35548 kB' 'Cached: 4623100 kB' 'SwapCached: 0 kB' 'Active: 1012928 kB' 'Inactive: 3777528 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142424 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 160980 kB' 'Mapped: 67312 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261296 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64240 kB' 'KernelStack: 4240 kB' 'PageTables: 3164 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
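Each get_meminfo call re-reads and re-scans the whole file, which is why the same field list keeps reappearing in this trace. When several counters are needed at once, a single pass is a common alternative (illustrative only; not how setup/common.sh does it):

    # Fetch four hugepage counters in one pass over /proc/meminfo.
    read -r total free rsvd surp < <(awk '
        /^HugePages_Total:/ {t=$2}
        /^HugePages_Free:/  {f=$2}
        /^HugePages_Rsvd:/  {r=$2}
        /^HugePages_Surp:/  {s=$2}
        END {print t, f, r, s}' /proc/meminfo)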
00:05:41.419 00:25:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:41.419 00:25:14 -- setup/common.sh@32 -- # continue
00:05:41.420 00:25:14 -- setup/common.sh@31 -- # IFS=': '
00:05:41.420 00:25:14 -- setup/common.sh@31 -- # read -r var val _
00:05:41.420 00:25:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:41.420 00:25:14 -- setup/common.sh@33 -- # echo 0
00:05:41.420 00:25:14 -- setup/common.sh@33 -- # return 0
00:05:41.420 00:25:14 -- setup/hugepages.sh@100 -- # resv=0
00:05:41.420 00:25:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:41.420 nr_hugepages=1024
00:05:41.420 00:25:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:41.420 resv_hugepages=0
00:05:41.420 00:25:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:41.420 surplus_hugepages=0
00:05:41.420 00:25:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:41.420 anon_hugepages=0
00:05:41.420 00:25:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.420 00:25:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:41.420 00:25:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
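The two arithmetic checks just traced are the heart of the verification: the kernel's HugePages_Total must equal the requested count plus whatever surplus and reserved pages were accounted for, and with surp=resv=0 it must equal nr_hugepages exactly. As a sketch, using the surp and resv captured earlier:

    nr_hugepages=1024
    total=$(get_meminfo HugePages_Total)
    # The pool is consistent when the total covers the request plus surplus/reserved.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2
    (( total == nr_hugepages )) && echo "nr_hugepages=$total verified"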
00:05:41.421 00:25:14 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:41.421 00:25:14 -- setup/common.sh@18 -- # local node=
00:05:41.421 00:25:14 -- setup/common.sh@19 -- # local var val
00:05:41.421 00:25:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.421 00:25:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.421 00:25:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.421 00:25:14 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.421 00:25:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.421 00:25:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.421 00:25:14 -- setup/common.sh@31 -- # IFS=': '
00:05:41.421 00:25:14 -- setup/common.sh@31 -- # read -r var val _
00:05:41.421 00:25:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4970780 kB' 'MemAvailable: 9493308 kB' 'Buffers: 35548 kB' 'Cached: 4623100 kB' 'SwapCached: 0 kB' 'Active: 1012928 kB' 'Inactive: 3777768 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142664 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 161220 kB' 'Mapped: 67312 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261296 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64240 kB' 'KernelStack: 4292 kB' 'PageTables: 3384 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:41.421 00:25:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:41.421 00:25:14 -- setup/common.sh@32 -- # continue
00:05:41.422 00:25:14 -- setup/common.sh@31 -- # IFS=': '
00:05:41.422 00:25:14 -- setup/common.sh@31 -- # read -r var val _
00:05:41.422 00:25:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:41.422 00:25:14 -- setup/common.sh@33 -- # echo 1024
00:05:41.422 00:25:14 -- setup/common.sh@33 -- # return 0
00:05:41.422 00:25:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.422 00:25:14 -- setup/hugepages.sh@112 -- # get_nodes
00:05:41.422 00:25:14 -- setup/hugepages.sh@27 -- # local node
00:05:41.422 00:25:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.422 00:25:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:41.422 00:25:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:41.422 00:25:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:41.422 00:25:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:41.422 00:25:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:41.422 00:25:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
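For the per-node pass that follows, the same helper is pointed at /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") step strips (the +([0-9]) pattern needs extglob). A sketch of the per-node variant of the lookup:

    shopt -s extglob
    get_node_meminfo() {
        local get=$1 node=$2 line var val _ mem
        # Per-node counters live under sysfs, prefixed with "Node <id> ".
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node 0 ' prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_node_meminfo HugePages_Surp 0      # per-node surplus, 0 here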
00:05:41.422 00:25:14 -- setup/hugepages.sh@112 -- # get_nodes
00:05:41.422 00:25:14 -- setup/hugepages.sh@27 -- # local node
00:05:41.422 00:25:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:41.422 00:25:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:41.422 00:25:14 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:41.422 00:25:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:41.422 00:25:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:41.422 00:25:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:41.422 00:25:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:41.422 00:25:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.422 00:25:14 -- setup/common.sh@18 -- # local node=0
00:05:41.422 00:25:14 -- setup/common.sh@19 -- # local var val
00:05:41.422 00:25:14 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.422 00:25:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.422 00:25:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:41.422 00:25:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:41.422 00:25:14 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.422 00:25:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.422 00:25:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4970780 kB' 'MemUsed: 7272192 kB' 'SwapCached: 0 kB' 'Active: 1012928 kB' 'Inactive: 3777748 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142644 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'FilePages: 4658648 kB' 'Mapped: 67312 kB' 'AnonPages: 161200 kB' 'Shmem: 2596 kB' 'KernelStack: 4344 kB' 'PageTables: 3344 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197056 kB' 'Slab: 261296 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64240 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:41.422 [... xtrace condensed: every node0 meminfo field from MemTotal through HugePages_Free is compared against HugePages_Surp and hits continue ...]
00:05:41.423 00:25:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.423 00:25:14 -- setup/common.sh@33 -- # echo 0
00:05:41.423 00:25:14 -- setup/common.sh@33 -- # return 0
00:05:41.423 00:25:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:41.423 00:25:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:41.423 00:25:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:41.423 00:25:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:41.423 node0=1024 expecting 1024
00:05:41.423 00:25:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:41.423 00:25:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:41.423 00:25:14 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:41.423 00:25:14 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:41.423 00:25:14 -- setup/hugepages.sh@202 -- # setup output
00:05:41.423 00:25:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:41.423 00:25:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:41.682 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:41.682 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:41.682 INFO: Requested 512 hugepages but 1024 already allocated on node0
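Condensed, the per-node bookkeeping hugepages.sh@112-130 just traced amounts to the following. This is stand-in code, not a verbatim excerpt of setup/hugepages.sh; the array names mirror the trace and the hard-coded zeros stand in for the HugePages_Rsvd/HugePages_Surp values this run sampled:

    #!/usr/bin/env bash
    nodes_sys=([0]=1024)    # per-node hugepages the kernel reports
    nodes_test=([0]=1024)   # per-node hugepages the test expects
    resv=0                  # HugePages_Rsvd, sampled as 0 in this run
    surp=0                  # HugePages_Surp for node0, sampled as 0

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # reserved pages still count as present
        (( nodes_test[node] += surp ))   # so do surplus pages
        echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || exit 1
    done
    # -> node0=1024 expecting 1024, matching the trace above.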
00:05:41.682 00:25:15 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:41.682 00:25:15 -- setup/hugepages.sh@89 -- # local node
00:05:41.682 00:25:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:41.682 00:25:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:41.682 00:25:15 -- setup/hugepages.sh@92 -- # local surp
00:05:41.682 00:25:15 -- setup/hugepages.sh@93 -- # local resv
00:05:41.682 00:25:15 -- setup/hugepages.sh@94 -- # local anon
00:05:41.682 00:25:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:41.682 00:25:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:41.682 00:25:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:41.682 00:25:15 -- setup/common.sh@18 -- # local node=
00:05:41.682 00:25:15 -- setup/common.sh@19 -- # local var val
00:05:41.682 00:25:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.682 00:25:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.682 00:25:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.682 00:25:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.682 00:25:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.682 00:25:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.682 00:25:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969564 kB' 'MemAvailable: 9492092 kB' 'Buffers: 35548 kB' 'Cached: 4623100 kB' 'SwapCached: 0 kB' 'Active: 1012940 kB' 'Inactive: 3778200 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 143096 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 161812 kB' 'Mapped: 67316 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261456 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64400 kB' 'KernelStack: 4444 kB' 'PageTables: 3760 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19428 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:41.683 [... xtrace condensed: every /proc/meminfo field from MemTotal through HardwareCorrupted is compared against AnonHugePages and hits continue ...]
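The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test traced at hugepages.sh@96 above reads the kernel's transparent-hugepage mode string, in which the bracketed word is the active setting. A sketch of the same probe, with illustrative variable names and awk standing in for the get_meminfo call:

    # If THP is not forced off, anonymous huge pages can exist, so the
    # verifier samples AnonHugePages before reconciling hugetlb counters.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=$anon"   # 0 kB in this run
    fi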
00:05:41.945 00:25:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:41.945 00:25:15 -- setup/common.sh@33 -- # echo 0
00:05:41.945 00:25:15 -- setup/common.sh@33 -- # return 0
00:05:41.945 00:25:15 -- setup/hugepages.sh@97 -- # anon=0
00:05:41.945 00:25:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:41.945 00:25:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:41.945 00:25:15 -- setup/common.sh@18 -- # local node=
00:05:41.945 00:25:15 -- setup/common.sh@19 -- # local var val
00:05:41.945 00:25:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.945 00:25:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.945 00:25:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.945 00:25:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.945 00:25:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.945 00:25:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.945 00:25:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969564 kB' 'MemAvailable: 9492092 kB' 'Buffers: 35548 kB' 'Cached: 4623100 kB' 'SwapCached: 0 kB' 'Active: 1012940 kB' 'Inactive: 3778164 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 143060 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 161776 kB' 'Mapped: 67316 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261456 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64400 kB' 'KernelStack: 4428 kB' 'PageTables: 3720 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:41.945 [... xtrace condensed: every /proc/meminfo field from MemTotal through HugePages_Free is compared against HugePages_Surp and hits continue ...]
00:05:41.946 00:25:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:41.946 00:25:15 -- setup/common.sh@33 -- # echo 0
00:05:41.946 00:25:15 -- setup/common.sh@33 -- # return 0
00:05:41.946 00:25:15 -- setup/hugepages.sh@99 -- # surp=0
00:05:41.946 00:25:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:41.946 00:25:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:41.946 00:25:15 -- setup/common.sh@18 -- # local node=
00:05:41.946 00:25:15 -- setup/common.sh@19 -- # local var val
00:05:41.946 00:25:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.946 00:25:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.946 00:25:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.946 00:25:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.946 00:25:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.946 00:25:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.946 00:25:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969564 kB' 'MemAvailable: 9492092 kB' 'Buffers: 35548 kB' 'Cached: 4623100 kB' 'SwapCached: 0 kB' 'Active: 1012932 kB' 'Inactive: 3778112 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 143008 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 161700 kB' 'Mapped: 67316 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261344 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64288 kB' 'KernelStack: 4396 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:41.946 [... xtrace condensed: every /proc/meminfo field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and hits continue ...]
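Each of the scans above and below walks the whole of /proc/meminfo for one key. When inspecting a run like this by hand, a single awk pass can pull all four hugetlb counters at once; this one-liner is equivalent in effect, not SPDK code:

    awk -F': +' '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1 "=" $2}' /proc/meminfo
    # On this runner: HugePages_Total=1024  HugePages_Free=1024
    #                 HugePages_Rsvd=0      HugePages_Surp=0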
00:05:41.947 00:25:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:41.947 00:25:15 -- setup/common.sh@33 -- # echo 0
00:05:41.947 00:25:15 -- setup/common.sh@33 -- # return 0
00:05:41.947 00:25:15 -- setup/hugepages.sh@100 -- # resv=0
00:05:41.947 nr_hugepages=1024
00:05:41.947 00:25:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:41.947 resv_hugepages=0
00:05:41.947 00:25:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:41.947 surplus_hugepages=0
00:05:41.947 00:25:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:41.947 anon_hugepages=0
00:05:41.947 00:25:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:41.947 00:25:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:41.947 00:25:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
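With anon, surp and resv all sampled as 0, the consistency checks traced at hugepages.sh@107-109 reduce to the arithmetic below; the variable names are stand-ins mirroring the trace, not the script's exact code:

    nr_hugepages=1024 surp=0 resv=0 anon=0
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    (( 1024 == nr_hugepages + surp + resv )) \
        && (( 1024 == nr_hugepages )) \
        && echo "hugepage accounting consistent"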
00:05:41.947 00:25:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:41.947 00:25:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:41.947 00:25:15 -- setup/common.sh@18 -- # local node=
00:05:41.947 00:25:15 -- setup/common.sh@19 -- # local var val
00:05:41.947 00:25:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:41.947 00:25:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:41.947 00:25:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:41.947 00:25:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:41.947 00:25:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:41.947 00:25:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:41.947 00:25:15 -- setup/common.sh@31 -- # IFS=': '
00:05:41.947 00:25:15 -- setup/common.sh@31 -- # read -r var val _
00:05:41.947 00:25:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969776 kB' 'MemAvailable: 9492304 kB' 'Buffers: 35548 kB' 'Cached: 4623100 kB' 'SwapCached: 0 kB' 'Active: 1012928 kB' 'Inactive: 3777732 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142628 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'AnonPages: 161196 kB' 'Mapped: 67316 kB' 'Shmem: 2596 kB' 'KReclaimable: 197056 kB' 'Slab: 261184 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64128 kB' 'KernelStack: 4288 kB' 'PageTables: 3272 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072908 kB' 'Committed_AS: 510348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 2990080 kB' 'DirectMap1G: 11534336 kB'
00:05:41.948 [... xtrace condensed: /proc/meminfo fields from MemTotal through Committed_AS are compared against HugePages_Total and hit continue ...]
00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue
00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': '
00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _
00:05:41.948 00:25:15 --
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.948 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.948 00:25:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.948 00:25:15 -- setup/common.sh@33 -- # echo 1024 00:05:41.948 00:25:15 -- setup/common.sh@33 -- # return 0 00:05:41.948 00:25:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.948 00:25:15 -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.948 00:25:15 -- setup/hugepages.sh@27 -- # local node 00:05:41.948 00:25:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.948 00:25:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:41.948 00:25:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.948 00:25:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.948 00:25:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.949 00:25:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.949 00:25:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 
0 00:05:41.949 00:25:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.949 00:25:15 -- setup/common.sh@18 -- # local node=0 00:05:41.949 00:25:15 -- setup/common.sh@19 -- # local var val 00:05:41.949 00:25:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.949 00:25:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.949 00:25:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.949 00:25:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.949 00:25:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.949 00:25:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.949 00:25:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242972 kB' 'MemFree: 4969524 kB' 'MemUsed: 7273448 kB' 'SwapCached: 0 kB' 'Active: 1012928 kB' 'Inactive: 3777664 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 142560 kB' 'Active(file): 1011884 kB' 'Inactive(file): 3635104 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 320 kB' 'Writeback: 0 kB' 'FilePages: 4658648 kB' 'Mapped: 67316 kB' 'AnonPages: 161192 kB' 'Shmem: 2596 kB' 'KernelStack: 4300 kB' 'PageTables: 3472 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197056 kB' 'Slab: 261184 kB' 'SReclaimable: 197056 kB' 'SUnreclaim: 64128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # 
continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.949 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.949 00:25:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.950 00:25:15 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # continue 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.950 00:25:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.950 00:25:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.950 00:25:15 -- setup/common.sh@33 -- # echo 0 00:05:41.950 00:25:15 -- setup/common.sh@33 -- # return 0 00:05:41.950 00:25:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:41.950 00:25:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:41.950 00:25:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:41.950 00:25:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:41.950 node0=1024 expecting 1024 00:05:41.950 00:25:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:41.950 00:25:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:41.950 00:05:41.950 real 0m1.456s 00:05:41.950 user 0m0.556s 00:05:41.950 sys 0m0.974s 00:05:41.950 00:25:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.950 ************************************ 00:05:41.950 END TEST no_shrink_alloc 00:05:41.950 ************************************ 00:05:41.950 00:25:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.950 00:25:15 -- setup/hugepages.sh@217 -- # clear_hp 00:05:41.950 00:25:15 -- setup/hugepages.sh@37 -- # local node hp 00:05:41.950 00:25:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:41.950 00:25:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.950 00:25:15 -- setup/hugepages.sh@41 -- # echo 0 00:05:41.950 00:25:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:41.950 00:25:15 -- setup/hugepages.sh@41 -- # echo 0 00:05:41.950 00:25:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:41.950 00:25:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:41.950 ************************************ 00:05:41.950 END TEST hugepages 00:05:41.950 ************************************ 00:05:41.950 00:05:41.950 real 0m7.288s 00:05:41.950 user 0m2.466s 00:05:41.950 sys 0m4.597s 00:05:41.950 00:25:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.950 00:25:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.950 00:25:15 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:41.950 00:25:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.950 00:25:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.950 00:25:15 -- common/autotest_common.sh@10 -- # set +x 00:05:41.950 ************************************ 00:05:41.950 START TEST driver 00:05:41.950 ************************************ 00:05:41.950 00:25:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:42.208 * Looking for test storage... 
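[Note] The long runs of "continue" lines above are setup/common.sh's get_meminfo scanning a meminfo file key by key until the requested field (HugePages_Rsvd, HugePages_Total, HugePages_Surp, ...) matches, then echoing its value. A minimal sketch of that lookup, assuming a plain /proc/meminfo; the real helper also reads /sys/devices/system/node/node<N>/meminfo and strips the "Node <N>" prefix before matching:

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the "continue" lines in the trace
            echo "$val"                        # e.g. 1024 for HugePages_Total
            return 0
        done < /proc/meminfo
        return 1
    }
    # usage: resv=$(get_meminfo_sketch HugePages_Rsvd)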
00:05:42.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:42.208 00:25:15 -- setup/driver.sh@68 -- # setup reset 00:05:42.208 00:25:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.208 00:25:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.467 00:25:16 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:42.467 00:25:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.467 00:25:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.467 00:25:16 -- common/autotest_common.sh@10 -- # set +x 00:05:42.725 ************************************ 00:05:42.725 START TEST guess_driver 00:05:42.725 ************************************ 00:05:42.725 00:25:16 -- common/autotest_common.sh@1111 -- # guess_driver 00:05:42.725 00:25:16 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:42.725 00:25:16 -- setup/driver.sh@47 -- # local fail=0 00:05:42.725 00:25:16 -- setup/driver.sh@49 -- # pick_driver 00:05:42.725 00:25:16 -- setup/driver.sh@36 -- # vfio 00:05:42.725 00:25:16 -- setup/driver.sh@21 -- # local iommu_groups 00:05:42.725 00:25:16 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:42.725 00:25:16 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:42.725 00:25:16 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:42.725 00:25:16 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:42.725 00:25:16 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:42.725 00:25:16 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:42.725 00:25:16 -- setup/driver.sh@32 -- # return 1 00:05:42.725 00:25:16 -- setup/driver.sh@38 -- # uio 00:05:42.725 00:25:16 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:42.725 00:25:16 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:42.725 00:25:16 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:42.725 00:25:16 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:42.725 00:25:16 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:42.725 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:42.725 00:25:16 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:42.725 00:25:16 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:42.725 00:25:16 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:42.725 Looking for driver=uio_pci_generic 00:05:42.725 00:25:16 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:42.725 00:25:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.725 00:25:16 -- setup/driver.sh@45 -- # setup output config 00:05:42.725 00:25:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.725 00:25:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.982 00:25:16 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:42.982 00:25:16 -- setup/driver.sh@58 -- # continue 00:05:42.982 00:25:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.982 00:25:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.982 00:25:16 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:42.982 00:25:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:44.357 00:25:17 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:44.357 00:25:17 -- setup/driver.sh@65 -- # setup reset
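[Note] Boiled down, the pick_driver run above is: vfio is only viable when IOMMU groups are populated or unsafe no-IOMMU mode is enabled; otherwise fall back to uio_pci_generic if modprobe can resolve its module chain, which is why this VM ends up on uio_pci_generic. A condensed sketch of that decision (nullglob is assumed so an empty groups directory yields an empty array; the real script matches the modprobe output against a *.ko glob):

    pick_driver_sketch() {
        shopt -s nullglob                   # empty dir -> empty array below
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci                   # IOMMU available: prefer vfio
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic            # the fallback taken in this log
        else
            echo 'No valid driver found'; return 1
        fi
    }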
00:05:44.357 00:25:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.357 00:25:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.616 00:05:44.616 real 0m2.022s 00:05:44.616 user 0m0.491s 00:05:44.616 sys 0m1.519s 00:05:44.616 00:25:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.616 ************************************ 00:05:44.616 END TEST guess_driver 00:05:44.616 ************************************ 00:05:44.616 00:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.616 00:05:44.616 real 0m2.620s 00:05:44.616 user 0m0.799s 00:05:44.616 sys 0m1.827s 00:05:44.616 00:25:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.616 00:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.616 ************************************ 00:05:44.616 END TEST driver 00:05:44.616 ************************************ 00:05:44.616 00:25:18 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:44.616 00:25:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.616 00:25:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.616 00:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:44.875 ************************************ 00:05:44.875 START TEST devices 00:05:44.875 ************************************ 00:05:44.875 00:25:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:44.875 * Looking for test storage... 00:05:44.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:44.875 00:25:18 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:44.875 00:25:18 -- setup/devices.sh@192 -- # setup reset 00:05:44.875 00:25:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.875 00:25:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.441 00:25:18 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:45.441 00:25:18 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:45.441 00:25:18 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:45.441 00:25:18 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:45.441 00:25:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:45.441 00:25:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:45.441 00:25:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:45.441 00:25:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:45.441 00:25:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:45.441 00:25:18 -- setup/devices.sh@196 -- # blocks=() 00:05:45.441 00:25:18 -- setup/devices.sh@196 -- # declare -a blocks 00:05:45.441 00:25:18 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:45.441 00:25:18 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:45.441 00:25:18 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:45.441 00:25:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:45.441 00:25:18 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:45.441 00:25:18 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:45.441 00:25:18 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:45.441 00:25:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:45.441 00:25:18 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:45.441 00:25:18 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:45.441 00:25:18 -- 
scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:45.441 No valid GPT data, bailing 00:05:45.441 00:25:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:45.441 00:25:18 -- scripts/common.sh@391 -- # pt= 00:05:45.441 00:25:18 -- scripts/common.sh@392 -- # return 1 00:05:45.441 00:25:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:45.441 00:25:18 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:45.441 00:25:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:45.441 00:25:18 -- setup/common.sh@80 -- # echo 5368709120 00:05:45.441 00:25:18 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:45.441 00:25:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:45.441 00:25:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:45.441 00:25:18 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:45.441 00:25:18 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:45.441 00:25:18 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:45.441 00:25:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.441 00:25:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.441 00:25:18 -- common/autotest_common.sh@10 -- # set +x 00:05:45.441 ************************************ 00:05:45.441 START TEST nvme_mount 00:05:45.441 ************************************ 00:05:45.441 00:25:18 -- common/autotest_common.sh@1111 -- # nvme_mount 00:05:45.441 00:25:18 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:45.441 00:25:18 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:45.441 00:25:18 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.441 00:25:18 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:45.441 00:25:18 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:45.441 00:25:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:45.441 00:25:18 -- setup/common.sh@40 -- # local part_no=1 00:05:45.441 00:25:18 -- setup/common.sh@41 -- # local size=1073741824 00:05:45.441 00:25:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:45.442 00:25:18 -- setup/common.sh@44 -- # parts=() 00:05:45.442 00:25:18 -- setup/common.sh@44 -- # local parts 00:05:45.442 00:25:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:45.442 00:25:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.442 00:25:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:45.442 00:25:18 -- setup/common.sh@46 -- # (( part++ )) 00:05:45.442 00:25:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:45.442 00:25:18 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:45.442 00:25:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:45.442 00:25:18 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:46.377 Creating new GPT entries in memory. 00:05:46.377 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:46.377 other utilities. 00:05:46.377 00:25:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:46.377 00:25:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.377 00:25:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:46.377 00:25:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:46.377 00:25:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:47.753 Creating new GPT entries in memory. 00:05:47.753 The operation has completed successfully. 00:05:47.753 00:25:20 -- setup/common.sh@57 -- # (( part++ )) 00:05:47.753 00:25:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.753 00:25:20 -- setup/common.sh@62 -- # wait 103724 00:05:47.753 00:25:20 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.753 00:25:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:47.753 00:25:20 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.753 00:25:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:47.753 00:25:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:47.753 00:25:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.753 00:25:20 -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.753 00:25:20 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:47.753 00:25:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:47.753 00:25:20 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.753 00:25:20 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.753 00:25:20 -- setup/devices.sh@53 -- # local found=0 00:05:47.753 00:25:20 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.753 00:25:20 -- setup/devices.sh@56 -- # : 00:05:47.753 00:25:20 -- setup/devices.sh@59 -- # local pci status 00:05:47.753 00:25:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.753 00:25:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:47.753 00:25:20 -- setup/devices.sh@47 -- # setup output config 00:05:47.753 00:25:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.753 00:25:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.753 00:25:21 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:47.753 00:25:21 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:47.753 00:25:21 -- setup/devices.sh@63 -- # found=1 00:05:47.753 00:25:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.753 00:25:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:47.753 00:25:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.753 00:25:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:47.753 00:25:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.132 00:25:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.132 00:25:22 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:49.132 00:25:22 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.132 00:25:22 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:49.132 00:25:22 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.132 00:25:22 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:49.132 00:25:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.132 00:25:22 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.132 00:25:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.132 00:25:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:49.132 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.132 00:25:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.132 00:25:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.132 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.132 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.132 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:49.132 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:49.132 00:25:22 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:49.132 00:25:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:49.132 00:25:22 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.132 00:25:22 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:49.132 00:25:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:49.132 00:25:22 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.132 00:25:22 -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.132 00:25:22 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:49.132 00:25:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:49.132 00:25:22 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.132 00:25:22 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:49.132 00:25:22 -- setup/devices.sh@53 -- # local found=0 00:05:49.132 00:25:22 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:49.132 00:25:22 -- setup/devices.sh@56 -- # : 00:05:49.132 00:25:22 -- setup/devices.sh@59 -- # local pci status 00:05:49.132 00:25:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.132 00:25:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:49.132 00:25:22 -- setup/devices.sh@47 -- # setup output config 00:05:49.132 00:25:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.132 00:25:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:49.132 00:25:22 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:49.132 00:25:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:49.132 00:25:22 -- setup/devices.sh@63 -- # found=1 00:05:49.132 00:25:22 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:05:49.132 00:25:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:49.132 00:25:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.391 00:25:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:49.391 00:25:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.325 00:25:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.325 00:25:23 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:50.325 00:25:23 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.325 00:25:23 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:50.325 00:25:23 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.325 00:25:23 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.325 00:25:23 -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:50.325 00:25:23 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:50.325 00:25:23 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:50.325 00:25:23 -- setup/devices.sh@50 -- # local mount_point= 00:05:50.325 00:25:23 -- setup/devices.sh@51 -- # local test_file= 00:05:50.325 00:25:23 -- setup/devices.sh@53 -- # local found=0 00:05:50.325 00:25:23 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:50.325 00:25:23 -- setup/devices.sh@59 -- # local pci status 00:05:50.325 00:25:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.325 00:25:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:50.325 00:25:23 -- setup/devices.sh@47 -- # setup output config 00:05:50.325 00:25:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.325 00:25:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:50.893 00:25:24 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:50.893 00:25:24 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:50.893 00:25:24 -- setup/devices.sh@63 -- # found=1 00:05:50.893 00:25:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.893 00:25:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:50.893 00:25:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.893 00:25:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:50.893 00:25:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.829 00:25:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.829 00:25:25 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:51.829 00:25:25 -- setup/devices.sh@68 -- # return 0 00:05:51.829 00:25:25 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:51.829 00:25:25 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.829 00:25:25 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.829 00:25:25 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.829 00:25:25 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:51.829 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:51.829 00:05:51.829 real 0m6.489s 00:05:51.829 user 0m0.766s 00:05:51.829 sys 0m3.732s 00:05:51.829 00:25:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.829 
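[Note] Condensed, the nvme_mount pass that just finished is: carve a partition (the --new=1:2048:264191 call above, start sector 2048 plus 262144 units from the script's size arithmetic), format it, mount it, drop a marker file, verify, then tear down and wipe. A minimal sketch of that cycle with the paths and numbers taken from the trace; the real test also replays udev events via sync_dev_uevents.sh and re-runs setup.sh between steps:

    disk=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    size=$((1073741824 / 4096))              # 262144, the script's unit math
    sgdisk "$disk" --zap-all                 # wipe old GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:$((2048 + size - 1))   # 264191
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"                # quiet, force non-interactive
    mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"                     # marker file the verify step checks
    mountpoint -q "$mnt" && [[ -e $mnt/test_nvme ]] && echo verified
    umount "$mnt" && wipefs --all "${disk}p1"   # cleanup, as in cleanup_nvme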
************************************ 00:05:51.829 END TEST nvme_mount 00:05:51.829 ************************************ 00:05:51.829 00:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:51.830 00:25:25 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:51.830 00:25:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.830 00:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.830 00:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:52.089 ************************************ 00:05:52.089 START TEST dm_mount 00:05:52.089 ************************************ 00:05:52.089 00:25:25 -- common/autotest_common.sh@1111 -- # dm_mount 00:05:52.089 00:25:25 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:52.089 00:25:25 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:52.089 00:25:25 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:52.089 00:25:25 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:52.089 00:25:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:52.089 00:25:25 -- setup/common.sh@40 -- # local part_no=2 00:05:52.089 00:25:25 -- setup/common.sh@41 -- # local size=1073741824 00:05:52.089 00:25:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:52.089 00:25:25 -- setup/common.sh@44 -- # parts=() 00:05:52.089 00:25:25 -- setup/common.sh@44 -- # local parts 00:05:52.089 00:25:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:52.089 00:25:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:52.089 00:25:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:52.089 00:25:25 -- setup/common.sh@46 -- # (( part++ )) 00:05:52.089 00:25:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:52.089 00:25:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:52.089 00:25:25 -- setup/common.sh@46 -- # (( part++ )) 00:05:52.089 00:25:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:52.089 00:25:25 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:52.089 00:25:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:52.089 00:25:25 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:53.025 Creating new GPT entries in memory. 00:05:53.025 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:53.025 other utilities. 00:05:53.025 00:25:26 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:53.025 00:25:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:53.025 00:25:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:53.025 00:25:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:53.025 00:25:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:53.960 Creating new GPT entries in memory. 00:05:53.960 The operation has completed successfully. 00:05:53.961 00:25:27 -- setup/common.sh@57 -- # (( part++ )) 00:05:53.961 00:25:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:53.961 00:25:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:53.961 00:25:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:53.961 00:25:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:55.337 The operation has completed successfully. 
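[Note] The dm_mount test now under way turns the two partitions just created into a single device-mapper node called nvme_dm_test. The log records the dmsetup create call and the later holders/ checks but not the table that was fed in, so the linear concatenation below is an assumption for illustration, not the script's exact table:

    p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")            # partition sizes in 512-byte sectors
    s2=$(blockdev --getsz "$p2")
    dmsetup create nvme_dm_test <<EOF       # table: start length linear dev offset
    0 $s1 linear $p1 0
    $s1 $s2 linear $p2 0
    EOF
    readlink -f /dev/mapper/nvme_dm_test    # resolves to /dev/dm-0 in this run
    ls /sys/class/block/nvme0n1p1/holders   # dm-0 now appears as a holder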
00:05:55.337 00:25:28 -- setup/common.sh@57 -- # (( part++ )) 00:05:55.337 00:25:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:55.337 00:25:28 -- setup/common.sh@62 -- # wait 104218 00:05:55.337 00:25:28 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:55.337 00:25:28 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.337 00:25:28 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:55.337 00:25:28 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:55.337 00:25:28 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:55.337 00:25:28 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:55.337 00:25:28 -- setup/devices.sh@161 -- # break 00:05:55.337 00:25:28 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:55.337 00:25:28 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:55.337 00:25:28 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:55.337 00:25:28 -- setup/devices.sh@166 -- # dm=dm-0 00:05:55.337 00:25:28 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:55.337 00:25:28 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:55.337 00:25:28 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.337 00:25:28 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:55.337 00:25:28 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.337 00:25:28 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:55.337 00:25:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:55.337 00:25:28 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.337 00:25:28 -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:55.337 00:25:28 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:55.337 00:25:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:55.337 00:25:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.337 00:25:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:55.337 00:25:28 -- setup/devices.sh@53 -- # local found=0 00:05:55.337 00:25:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:55.337 00:25:28 -- setup/devices.sh@56 -- # : 00:05:55.337 00:25:28 -- setup/devices.sh@59 -- # local pci status 00:05:55.337 00:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.337 00:25:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:55.337 00:25:28 -- setup/devices.sh@47 -- # setup output config 00:05:55.337 00:25:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.337 00:25:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:55.337 00:25:28 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:55.337 00:25:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:55.337 00:25:28 -- setup/devices.sh@63 -- # found=1 00:05:55.337 00:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.337 00:25:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:55.337 00:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.596 00:25:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:55.596 00:25:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.532 00:25:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:56.532 00:25:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:56.532 00:25:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:56.532 00:25:30 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:56.532 00:25:30 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:56.532 00:25:30 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:56.532 00:25:30 -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:56.532 00:25:30 -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:56.532 00:25:30 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:56.532 00:25:30 -- setup/devices.sh@50 -- # local mount_point= 00:05:56.532 00:25:30 -- setup/devices.sh@51 -- # local test_file= 00:05:56.532 00:25:30 -- setup/devices.sh@53 -- # local found=0 00:05:56.532 00:25:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:56.532 00:25:30 -- setup/devices.sh@59 -- # local pci status 00:05:56.532 00:25:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.532 00:25:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:56.532 00:25:30 -- setup/devices.sh@47 -- # setup output config 00:05:56.532 00:25:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.532 00:25:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:56.791 00:25:30 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:56.791 00:25:30 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:56.791 00:25:30 -- setup/devices.sh@63 -- # found=1 00:05:56.791 00:25:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.791 00:25:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:56.791 00:25:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.050 00:25:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:57.050 00:25:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:57.984 00:25:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:57.985 00:25:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:57.985 00:25:31 -- setup/devices.sh@68 -- # return 0 00:05:57.985 00:25:31 -- setup/devices.sh@187 -- # cleanup_dm 00:05:57.985 00:25:31 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:57.985 00:25:31 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:57.985 00:25:31 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:57.985 00:25:31 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:57.985 00:25:31 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:57.985 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:57.985 00:25:31 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:57.985 00:25:31 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:58.243 00:05:58.243 real 0m6.137s 00:05:58.243 user 0m0.511s 00:05:58.243 sys 0m2.477s 00:05:58.243 00:25:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.243 00:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:58.243 ************************************ 00:05:58.243 END TEST dm_mount 00:05:58.243 ************************************ 00:05:58.243 00:25:31 -- setup/devices.sh@1 -- # cleanup 00:05:58.243 00:25:31 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:58.243 00:25:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:58.243 00:25:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.243 00:25:31 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:58.243 00:25:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.243 00:25:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:58.243 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:58.243 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:58.243 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:58.243 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:58.243 00:25:31 -- setup/devices.sh@12 -- # cleanup_dm 00:05:58.243 00:25:31 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.243 00:25:31 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:58.243 00:25:31 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.243 00:25:31 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:58.243 00:25:31 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.243 00:25:31 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:58.243 00:05:58.243 real 0m13.470s 00:05:58.243 user 0m1.732s 00:05:58.243 sys 0m6.584s 00:05:58.243 00:25:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.243 ************************************ 00:05:58.243 END TEST devices 00:05:58.243 ************************************ 00:05:58.243 00:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:58.243 00:05:58.243 real 0m28.967s 00:05:58.243 user 0m6.862s 00:05:58.243 sys 0m16.836s 00:05:58.243 00:25:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.243 00:25:31 -- common/autotest_common.sh@10 -- # set +x 00:05:58.243 ************************************ 00:05:58.243 END TEST setup.sh 00:05:58.243 ************************************ 00:05:58.243 00:25:31 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:58.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:58.820 Hugepages 00:05:58.820 node hugesize free / total 00:05:58.820 node0 1048576kB 0 / 0 00:05:58.820 node0 2048kB 2048 / 2048 00:05:58.820 00:05:58.820 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:58.820 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:58.820 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:58.820 00:25:32 -- spdk/autotest.sh@130 -- # uname -s 00:05:58.820 
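[Note] The "nvme -> uio_pci_generic" transitions in the lines that follow are setup.sh rebinding the controller at 0000:00:10.0 between the kernel nvme driver and the userspace-friendly uio_pci_generic. setup.sh's real logic has several fallbacks; a generic sysfs sketch of one such rebind, assuming a kernel with driver_override support:

    bdf=0000:00:10.0
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # detach nvme
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                      # bind new driver
    echo "" > "/sys/bus/pci/devices/$bdf/driver_override"         # clear override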
00:25:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:58.820 00:25:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:58.820 00:25:32 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:59.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:59.387 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.322 00:25:33 -- common/autotest_common.sh@1518 -- # sleep 1 00:06:01.696 00:25:34 -- common/autotest_common.sh@1519 -- # bdfs=() 00:06:01.696 00:25:34 -- common/autotest_common.sh@1519 -- # local bdfs 00:06:01.696 00:25:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:01.696 00:25:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:01.696 00:25:34 -- common/autotest_common.sh@1499 -- # bdfs=() 00:06:01.696 00:25:34 -- common/autotest_common.sh@1499 -- # local bdfs 00:06:01.696 00:25:34 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.696 00:25:34 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:01.696 00:25:34 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:06:01.696 00:25:34 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:06:01.696 00:25:34 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:06:01.696 00:25:34 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:01.696 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:01.696 Waiting for block devices as requested 00:06:01.954 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:01.954 00:25:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:01.954 00:25:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:01.954 00:25:35 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:06:01.954 00:25:35 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:06:01.954 00:25:35 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:01.954 00:25:35 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:06:01.954 00:25:35 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:06:01.954 00:25:35 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:06:01.954 00:25:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:01.954 00:25:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:01.954 00:25:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:01.954 00:25:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:01.954 00:25:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:01.954 00:25:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:01.954 00:25:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:01.954 00:25:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:01.954 00:25:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:01.954 00:25:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:01.954 00:25:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:01.954 00:25:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:01.954 00:25:35 -- common/autotest_common.sh@1541 
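A minimal sketch of the OACS probe traced above, assuming the same /dev/nvme0 controller; id-ctrl reports oacs as 0x12a on this VM, and the script masks out bit 3 (0x8), the Namespace Management/Attachment capability, before letting the namespace-revert step continue:

# extract the Optional Admin Command Support field, e.g. ' 0x12a'
oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
# bit 3 (mask 0x8) = namespace management; non-zero means supported
oacs_ns_manage=$(( oacs & 0x8 ))
(( oacs_ns_manage != 0 )) && echo "namespace management supported"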
-- # [[ 0 -eq 0 ]] 00:06:01.954 00:25:35 -- common/autotest_common.sh@1543 -- # continue 00:06:01.954 00:25:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:01.954 00:25:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:01.954 00:25:35 -- common/autotest_common.sh@10 -- # set +x 00:06:01.954 00:25:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:01.954 00:25:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:01.954 00:25:35 -- common/autotest_common.sh@10 -- # set +x 00:06:01.954 00:25:35 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:02.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:06:02.471 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:03.408 00:25:36 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:03.408 00:25:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:03.408 00:25:36 -- common/autotest_common.sh@10 -- # set +x 00:06:03.408 00:25:36 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:03.408 00:25:36 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:06:03.408 00:25:36 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:06:03.408 00:25:36 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:03.408 00:25:36 -- common/autotest_common.sh@1563 -- # local bdfs 00:06:03.408 00:25:36 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:06:03.408 00:25:36 -- common/autotest_common.sh@1499 -- # bdfs=() 00:06:03.408 00:25:36 -- common/autotest_common.sh@1499 -- # local bdfs 00:06:03.408 00:25:36 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:03.408 00:25:36 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:03.409 00:25:36 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:06:03.669 00:25:37 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:06:03.669 00:25:37 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:06:03.669 00:25:37 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:06:03.669 00:25:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:03.669 00:25:37 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:03.669 00:25:37 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:03.669 00:25:37 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:06:03.669 00:25:37 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:06:03.669 00:25:37 -- common/autotest_common.sh@1579 -- # return 0 00:06:03.669 00:25:37 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:06:03.669 00:25:37 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:03.669 00:25:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.669 00:25:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.669 00:25:37 -- common/autotest_common.sh@10 -- # set +x 00:06:03.669 ************************************ 00:06:03.669 START TEST unittest 00:06:03.669 ************************************ 00:06:03.669 00:25:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:03.669 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:03.669 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:06:03.669 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:06:03.669 +++ 
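A minimal sketch of the NVMe BDF enumeration traced above, as a standalone command; gen_nvme.sh emits an SPDK bdev-attach config as JSON, and jq pulls each controller's PCI address (a single 0000:00:10.0 on this VM):

bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) && printf '%s\n' "${bdfs[@]}"   # -> 0000:00:10.0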
dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:06:03.669 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:06:03.669 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:03.669 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:03.669 ++ rpc_py=rpc_cmd 00:06:03.669 ++ set -e 00:06:03.669 ++ shopt -s nullglob 00:06:03.669 ++ shopt -s extglob 00:06:03.669 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:03.669 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:03.669 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:03.669 +++ CONFIG_WPDK_DIR= 00:06:03.669 +++ CONFIG_ASAN=y 00:06:03.669 +++ CONFIG_VBDEV_COMPRESS=n 00:06:03.669 +++ CONFIG_HAVE_EXECINFO_H=y 00:06:03.669 +++ CONFIG_USDT=n 00:06:03.669 +++ CONFIG_CUSTOMOCF=n 00:06:03.669 +++ CONFIG_PREFIX=/usr/local 00:06:03.669 +++ CONFIG_RBD=n 00:06:03.669 +++ CONFIG_LIBDIR= 00:06:03.669 +++ CONFIG_IDXD=y 00:06:03.669 +++ CONFIG_NVME_CUSE=y 00:06:03.669 +++ CONFIG_SMA=n 00:06:03.669 +++ CONFIG_VTUNE=n 00:06:03.669 +++ CONFIG_TSAN=n 00:06:03.669 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:03.669 +++ CONFIG_VFIO_USER_DIR= 00:06:03.669 +++ CONFIG_PGO_CAPTURE=n 00:06:03.669 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:03.669 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:03.669 +++ CONFIG_LTO=n 00:06:03.669 +++ CONFIG_ISCSI_INITIATOR=y 00:06:03.669 +++ CONFIG_CET=n 00:06:03.669 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:03.669 +++ CONFIG_OCF_PATH= 00:06:03.669 +++ CONFIG_RDMA_SET_TOS=y 00:06:03.669 +++ CONFIG_HAVE_ARC4RANDOM=n 00:06:03.669 +++ CONFIG_HAVE_LIBARCHIVE=n 00:06:03.669 +++ CONFIG_UBLK=n 00:06:03.669 +++ CONFIG_ISAL_CRYPTO=y 00:06:03.669 +++ CONFIG_OPENSSL_PATH= 00:06:03.669 +++ CONFIG_OCF=n 00:06:03.669 +++ CONFIG_FUSE=n 00:06:03.669 +++ CONFIG_VTUNE_DIR= 00:06:03.669 +++ CONFIG_FUZZER_LIB= 00:06:03.669 +++ CONFIG_FUZZER=n 00:06:03.669 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:03.669 +++ CONFIG_CRYPTO=n 00:06:03.669 +++ CONFIG_PGO_USE=n 00:06:03.669 +++ CONFIG_VHOST=y 00:06:03.669 +++ CONFIG_DAOS=n 00:06:03.669 +++ CONFIG_DPDK_INC_DIR= 00:06:03.669 +++ CONFIG_DAOS_DIR= 00:06:03.669 +++ CONFIG_UNIT_TESTS=y 00:06:03.669 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:03.669 +++ CONFIG_VIRTIO=y 00:06:03.669 +++ CONFIG_COVERAGE=y 00:06:03.669 +++ CONFIG_RDMA=y 00:06:03.669 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:03.669 +++ CONFIG_URING_PATH= 00:06:03.669 +++ CONFIG_XNVME=n 00:06:03.669 +++ CONFIG_VFIO_USER=n 00:06:03.669 +++ CONFIG_ARCH=native 00:06:03.669 +++ CONFIG_HAVE_EVP_MAC=y 00:06:03.669 +++ CONFIG_URING_ZNS=n 00:06:03.669 +++ CONFIG_WERROR=y 00:06:03.669 +++ CONFIG_HAVE_LIBBSD=n 00:06:03.669 +++ CONFIG_UBSAN=y 00:06:03.669 +++ CONFIG_IPSEC_MB_DIR= 00:06:03.669 +++ CONFIG_GOLANG=n 00:06:03.669 +++ CONFIG_ISAL=y 00:06:03.669 +++ CONFIG_IDXD_KERNEL=n 00:06:03.669 +++ CONFIG_DPDK_LIB_DIR= 00:06:03.669 +++ CONFIG_RDMA_PROV=verbs 00:06:03.669 +++ CONFIG_APPS=y 00:06:03.669 +++ CONFIG_SHARED=n 00:06:03.669 +++ CONFIG_HAVE_KEYUTILS=y 00:06:03.669 +++ CONFIG_FC_PATH= 00:06:03.669 +++ CONFIG_DPDK_PKG_CONFIG=n 00:06:03.669 +++ CONFIG_FC=n 00:06:03.669 +++ CONFIG_AVAHI=n 00:06:03.669 +++ CONFIG_FIO_PLUGIN=y 00:06:03.669 +++ CONFIG_RAID5F=y 00:06:03.669 +++ CONFIG_EXAMPLES=y 00:06:03.669 +++ CONFIG_TESTS=y 00:06:03.669 +++ CONFIG_CRYPTO_MLX5=n 00:06:03.669 +++ CONFIG_MAX_LCORES= 00:06:03.669 +++ CONFIG_IPSEC_MB=n 00:06:03.669 +++ CONFIG_PGO_DIR= 00:06:03.669 +++ CONFIG_DEBUG=y 00:06:03.669 +++ CONFIG_DPDK_COMPRESSDEV=n 
00:06:03.669 +++ CONFIG_CROSS_PREFIX= 00:06:03.669 +++ CONFIG_URING=n 00:06:03.669 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:03.669 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:03.669 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:03.669 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:03.669 +++ _root=/home/vagrant/spdk_repo/spdk 00:06:03.669 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:03.669 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:03.669 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:03.669 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:03.669 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:03.669 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:03.669 +++ VHOST_APP=("$_app_dir/vhost") 00:06:03.669 +++ DD_APP=("$_app_dir/spdk_dd") 00:06:03.669 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:06:03.669 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:03.669 +++ [[ #ifndef SPDK_CONFIG_H 00:06:03.669 #define SPDK_CONFIG_H 00:06:03.669 #define SPDK_CONFIG_APPS 1 00:06:03.669 #define SPDK_CONFIG_ARCH native 00:06:03.669 #define SPDK_CONFIG_ASAN 1 00:06:03.669 #undef SPDK_CONFIG_AVAHI 00:06:03.669 #undef SPDK_CONFIG_CET 00:06:03.669 #define SPDK_CONFIG_COVERAGE 1 00:06:03.669 #define SPDK_CONFIG_CROSS_PREFIX 00:06:03.669 #undef SPDK_CONFIG_CRYPTO 00:06:03.669 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:03.669 #undef SPDK_CONFIG_CUSTOMOCF 00:06:03.669 #undef SPDK_CONFIG_DAOS 00:06:03.669 #define SPDK_CONFIG_DAOS_DIR 00:06:03.669 #define SPDK_CONFIG_DEBUG 1 00:06:03.669 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:03.669 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:03.669 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:03.669 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:03.669 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:03.669 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:03.669 #define SPDK_CONFIG_EXAMPLES 1 00:06:03.669 #undef SPDK_CONFIG_FC 00:06:03.669 #define SPDK_CONFIG_FC_PATH 00:06:03.669 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:03.669 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:03.669 #undef SPDK_CONFIG_FUSE 00:06:03.669 #undef SPDK_CONFIG_FUZZER 00:06:03.669 #define SPDK_CONFIG_FUZZER_LIB 00:06:03.669 #undef SPDK_CONFIG_GOLANG 00:06:03.669 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:06:03.669 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:03.669 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:03.669 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:03.669 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:03.669 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:03.669 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:03.669 #define SPDK_CONFIG_IDXD 1 00:06:03.669 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:03.669 #undef SPDK_CONFIG_IPSEC_MB 00:06:03.669 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:03.669 #define SPDK_CONFIG_ISAL 1 00:06:03.669 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:03.669 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:03.669 #define SPDK_CONFIG_LIBDIR 00:06:03.669 #undef SPDK_CONFIG_LTO 00:06:03.669 #define SPDK_CONFIG_MAX_LCORES 00:06:03.669 #define SPDK_CONFIG_NVME_CUSE 1 00:06:03.669 #undef SPDK_CONFIG_OCF 00:06:03.669 #define SPDK_CONFIG_OCF_PATH 00:06:03.669 #define SPDK_CONFIG_OPENSSL_PATH 00:06:03.670 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:03.670 #define SPDK_CONFIG_PGO_DIR 00:06:03.670 #undef SPDK_CONFIG_PGO_USE 00:06:03.670 #define SPDK_CONFIG_PREFIX /usr/local 00:06:03.670 #define SPDK_CONFIG_RAID5F 1 00:06:03.670 
#undef SPDK_CONFIG_RBD 00:06:03.670 #define SPDK_CONFIG_RDMA 1 00:06:03.670 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:03.670 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:03.670 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:03.670 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:03.670 #undef SPDK_CONFIG_SHARED 00:06:03.670 #undef SPDK_CONFIG_SMA 00:06:03.670 #define SPDK_CONFIG_TESTS 1 00:06:03.670 #undef SPDK_CONFIG_TSAN 00:06:03.670 #undef SPDK_CONFIG_UBLK 00:06:03.670 #define SPDK_CONFIG_UBSAN 1 00:06:03.670 #define SPDK_CONFIG_UNIT_TESTS 1 00:06:03.670 #undef SPDK_CONFIG_URING 00:06:03.670 #define SPDK_CONFIG_URING_PATH 00:06:03.670 #undef SPDK_CONFIG_URING_ZNS 00:06:03.670 #undef SPDK_CONFIG_USDT 00:06:03.670 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:03.670 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:03.670 #undef SPDK_CONFIG_VFIO_USER 00:06:03.670 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:03.670 #define SPDK_CONFIG_VHOST 1 00:06:03.670 #define SPDK_CONFIG_VIRTIO 1 00:06:03.670 #undef SPDK_CONFIG_VTUNE 00:06:03.670 #define SPDK_CONFIG_VTUNE_DIR 00:06:03.670 #define SPDK_CONFIG_WERROR 1 00:06:03.670 #define SPDK_CONFIG_WPDK_DIR 00:06:03.670 #undef SPDK_CONFIG_XNVME 00:06:03.670 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:03.670 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:03.670 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:03.670 +++ [[ -e /bin/wpdk_common.sh ]] 00:06:03.670 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.670 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.670 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:03.670 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:03.670 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:03.670 ++++ export PATH 00:06:03.670 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:06:03.670 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:03.670 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:03.670 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:03.670 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:03.670 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:03.670 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:03.670 +++ TEST_TAG=N/A 00:06:03.670 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:03.670 +++ 
PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:03.670 ++++ uname -s 00:06:03.670 +++ PM_OS=Linux 00:06:03.670 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:03.670 +++ [[ Linux == FreeBSD ]] 00:06:03.670 +++ [[ Linux == Linux ]] 00:06:03.670 +++ [[ QEMU != QEMU ]] 00:06:03.670 +++ MONITOR_RESOURCES_PIDS=() 00:06:03.670 +++ declare -A MONITOR_RESOURCES_PIDS 00:06:03.670 +++ mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:06:03.670 ++ : 0 00:06:03.670 ++ export RUN_NIGHTLY 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_RUN_VALGRIND 00:06:03.670 ++ : 1 00:06:03.670 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:06:03.670 ++ : 1 00:06:03.670 ++ export SPDK_TEST_UNITTEST 00:06:03.670 ++ : 00:06:03.670 ++ export SPDK_TEST_AUTOBUILD 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_RELEASE_BUILD 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_ISAL 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_ISCSI 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_ISCSI_INITIATOR 00:06:03.670 ++ : 1 00:06:03.670 ++ export SPDK_TEST_NVME 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_NVME_PMR 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_NVME_BP 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_NVME_CLI 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_NVME_CUSE 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_NVME_FDP 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_NVMF 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_VFIOUSER 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_VFIOUSER_QEMU 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_FUZZER 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_FUZZER_SHORT 00:06:03.670 ++ : rdma 00:06:03.670 ++ export SPDK_TEST_NVMF_TRANSPORT 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_RBD 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_VHOST 00:06:03.670 ++ : 1 00:06:03.670 ++ export SPDK_TEST_BLOCKDEV 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_IOAT 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_BLOBFS 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_VHOST_INIT 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_LVOL 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_VBDEV_COMPRESS 00:06:03.670 ++ : 1 00:06:03.670 ++ export SPDK_RUN_ASAN 00:06:03.670 ++ : 1 00:06:03.670 ++ export SPDK_RUN_UBSAN 00:06:03.670 ++ : 00:06:03.670 ++ export SPDK_RUN_EXTERNAL_DPDK 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_RUN_NON_ROOT 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_CRYPTO 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_FTL 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_OCF 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_VMD 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_OPAL 00:06:03.670 ++ : 00:06:03.670 ++ export SPDK_TEST_NATIVE_DPDK 00:06:03.670 ++ : true 00:06:03.670 ++ export SPDK_AUTOTEST_X 00:06:03.670 ++ : 1 00:06:03.670 ++ export SPDK_TEST_RAID5 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_URING 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_USDT 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_USE_IGB_UIO 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_SCHEDULER 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_SCANBUILD 00:06:03.670 ++ : 00:06:03.670 ++ export SPDK_TEST_NVMF_NICS 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_SMA 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_DAOS 00:06:03.670 ++ : 0 
00:06:03.670 ++ export SPDK_TEST_XNVME 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_ACCEL_DSA 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_ACCEL_IAA 00:06:03.670 ++ : 00:06:03.670 ++ export SPDK_TEST_FUZZER_TARGET 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_TEST_NVMF_MDNS 00:06:03.670 ++ : 0 00:06:03.670 ++ export SPDK_JSONRPC_GO_CLIENT 00:06:03.670 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:03.670 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:03.670 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:03.670 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:03.670 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:03.670 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:03.670 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:03.670 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:03.670 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:03.670 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:06:03.670 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:03.670 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:03.670 ++ export PYTHONDONTWRITEBYTECODE=1 00:06:03.670 ++ PYTHONDONTWRITEBYTECODE=1 00:06:03.670 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:03.670 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:03.670 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:03.670 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:03.670 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:06:03.670 ++ rm -rf /var/tmp/asan_suppression_file 00:06:03.670 ++ cat 00:06:03.670 ++ echo leak:libfuse3.so 00:06:03.670 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:03.670 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:03.670 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:03.670 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:03.670 ++ '[' -z /var/spdk/dependencies ']' 00:06:03.670 ++ export DEPENDENCY_DIR 00:06:03.670 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:03.670 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:03.670 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:03.670 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:03.670 ++ export QEMU_BIN= 00:06:03.670 ++ QEMU_BIN= 00:06:03.670 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:03.671 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:06:03.671 ++ export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:03.671 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:03.671 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:03.671 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:03.671 ++ '[' 0 -eq 0 ']' 00:06:03.671 ++ export valgrind= 00:06:03.671 ++ valgrind= 00:06:03.671 +++ uname -s 00:06:03.671 ++ '[' Linux = Linux ']' 00:06:03.671 ++ HUGEMEM=4096 00:06:03.671 ++ export CLEAR_HUGE=yes 00:06:03.671 ++ CLEAR_HUGE=yes 00:06:03.671 ++ [[ 0 -eq 1 ]] 00:06:03.671 ++ [[ 0 -eq 1 ]] 00:06:03.671 ++ MAKE=make 00:06:03.671 +++ nproc 00:06:03.671 ++ MAKEFLAGS=-j10 00:06:03.671 ++ export HUGEMEM=4096 00:06:03.671 ++ HUGEMEM=4096 00:06:03.671 ++ NO_HUGE=() 00:06:03.671 ++ TEST_MODE= 00:06:03.671 ++ [[ -z '' ]] 00:06:03.671 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:03.671 ++ exec 00:06:03.671 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:06:03.671 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:06:03.671 ++ set_test_storage 2147483648 00:06:03.671 ++ [[ -v testdir ]] 00:06:03.671 ++ local requested_size=2147483648 00:06:03.671 ++ local mount target_dir 00:06:03.671 ++ local -A mounts fss sizes avails uses 00:06:03.671 ++ local source fs size avail mount use 00:06:03.671 ++ local storage_fallback storage_candidates 00:06:03.671 +++ mktemp -udt spdk.XXXXXX 00:06:03.671 ++ storage_fallback=/tmp/spdk.bp3pC7 00:06:03.671 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:03.671 ++ [[ -n '' ]] 00:06:03.671 ++ [[ -n '' ]] 00:06:03.671 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.bp3pC7/tests/unit /tmp/spdk.bp3pC7 00:06:03.671 ++ requested_size=2214592512 00:06:03.671 ++ read -r source fs size use avail _ mount 00:06:03.671 +++ df -T 00:06:03.671 +++ grep -v Filesystem 00:06:03.671 ++ mounts["$mount"]=tmpfs 00:06:03.671 ++ fss["$mount"]=tmpfs 00:06:03.671 ++ avails["$mount"]=1252601856 00:06:03.671 ++ sizes["$mount"]=1253683200 00:06:03.671 ++ uses["$mount"]=1081344 00:06:03.671 ++ read -r source fs size use avail _ mount 00:06:03.671 ++ mounts["$mount"]=/dev/vda1 00:06:03.671 ++ fss["$mount"]=ext4 00:06:03.671 ++ avails["$mount"]=10374168576 00:06:03.671 ++ sizes["$mount"]=20616794112 00:06:03.671 ++ uses["$mount"]=10225848320 00:06:03.671 ++ read -r source fs size use avail _ mount 00:06:03.671 ++ mounts["$mount"]=tmpfs 00:06:03.671 ++ fss["$mount"]=tmpfs 00:06:03.671 ++ avails["$mount"]=6268399616 00:06:03.671 ++ sizes["$mount"]=6268399616 00:06:03.671 ++ uses["$mount"]=0 00:06:03.671 ++ read -r source fs size use avail _ mount 00:06:03.671 ++ mounts["$mount"]=tmpfs 00:06:03.671 ++ fss["$mount"]=tmpfs 00:06:03.671 ++ avails["$mount"]=5242880 00:06:03.671 ++ sizes["$mount"]=5242880 00:06:03.671 ++ uses["$mount"]=0 00:06:03.671 ++ read -r source fs size use avail _ mount 00:06:03.671 ++ mounts["$mount"]=/dev/vda15 00:06:03.671 ++ fss["$mount"]=vfat 00:06:03.671 ++ avails["$mount"]=103061504 00:06:03.671 ++ sizes["$mount"]=109395968 00:06:03.671 ++ uses["$mount"]=6334464 00:06:03.671 ++ read -r source fs size use avail _ mount 00:06:03.671 ++ mounts["$mount"]=tmpfs 00:06:03.671 ++ fss["$mount"]=tmpfs 00:06:03.671 ++ avails["$mount"]=1253675008 00:06:03.671 ++ sizes["$mount"]=1253679104 00:06:03.671 ++ uses["$mount"]=4096 00:06:03.671 ++ read -r source fs size use avail _ mount 00:06:03.671 ++ 
mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:06:03.671 ++ fss["$mount"]=fuse.sshfs 00:06:03.671 ++ avails["$mount"]=95751114752 00:06:03.671 ++ sizes["$mount"]=105088212992 00:06:03.671 ++ uses["$mount"]=3951665152 00:06:03.671 ++ read -r source fs size use avail _ mount 00:06:03.671 ++ printf '* Looking for test storage...\n' 00:06:03.671 * Looking for test storage... 00:06:03.671 ++ local target_space new_size 00:06:03.671 ++ for target_dir in "${storage_candidates[@]}" 00:06:03.671 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:06:03.671 +++ awk '$1 !~ /Filesystem/{print $6}' 00:06:03.671 ++ mount=/ 00:06:03.671 ++ target_space=10374168576 00:06:03.671 ++ (( target_space == 0 || target_space < requested_size )) 00:06:03.671 ++ (( target_space >= requested_size )) 00:06:03.671 ++ [[ ext4 == tmpfs ]] 00:06:03.671 ++ [[ ext4 == ramfs ]] 00:06:03.671 ++ [[ / == / ]] 00:06:03.671 ++ new_size=12440440832 00:06:03.671 ++ (( new_size * 100 / sizes[/] > 95 )) 00:06:03.671 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:03.671 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:06:03.671 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:06:03.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:06:03.671 ++ return 0 00:06:03.671 ++ set -o errtrace 00:06:03.671 ++ shopt -s extdebug 00:06:03.671 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:06:03.671 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:03.671 00:25:37 -- common/autotest_common.sh@1673 -- # true 00:06:03.671 00:25:37 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:03.671 00:25:37 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:06:03.671 00:25:37 -- common/autotest_common.sh@29 -- # exec 00:06:03.671 00:25:37 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:03.671 00:25:37 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
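A minimal sketch of the test-storage probe traced above, reduced to a standalone loop (the real set_test_storage caches a single df -T pass into the mounts/fss/sizes/avails arrays shown in the trace); with this run's numbers, / offers 10374168576 bytes against a 2214592512-byte request, so the first candidate is accepted:

requested_size=2214592512   # 2 GiB plus overhead, as computed above
for target_dir in "$testdir" "$storage_fallback"; do
    # mount point and available bytes for the filesystem holding target_dir
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=$(df -B1 --output=avail "$target_dir" | tail -1)
    (( target_space >= requested_size )) && break   # big enough, use it
done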
0 : 0 - 1]' 00:06:03.671 00:25:37 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:03.671 00:25:37 -- common/autotest_common.sh@18 -- # set -x 00:06:03.671 00:25:37 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:06:03.671 00:25:37 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:06:03.671 00:25:37 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:06:03.671 00:25:37 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:06:03.671 00:25:37 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:06:03.671 00:25:37 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:06:03.671 00:25:37 -- unit/unittest.sh@179 -- # hash lcov 00:06:03.671 00:25:37 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:03.671 00:25:37 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:03.671 00:25:37 -- unit/unittest.sh@180 -- # cov_avail=yes 00:06:03.671 00:25:37 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:06:03.671 00:25:37 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:06:03.671 00:25:37 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:03.671 00:25:37 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:03.671 00:25:37 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:06:03.671 --rc lcov_branch_coverage=1 00:06:03.671 --rc lcov_function_coverage=1 00:06:03.671 --rc genhtml_branch_coverage=1 00:06:03.671 --rc genhtml_function_coverage=1 00:06:03.671 --rc genhtml_legend=1 00:06:03.671 --rc geninfo_all_blocks=1 00:06:03.671 ' 00:06:03.671 00:25:37 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:06:03.671 --rc lcov_branch_coverage=1 00:06:03.671 --rc lcov_function_coverage=1 00:06:03.671 --rc genhtml_branch_coverage=1 00:06:03.671 --rc genhtml_function_coverage=1 00:06:03.671 --rc genhtml_legend=1 00:06:03.671 --rc geninfo_all_blocks=1 00:06:03.671 ' 00:06:03.671 00:25:37 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:06:03.671 --rc lcov_branch_coverage=1 00:06:03.671 --rc lcov_function_coverage=1 00:06:03.671 --rc genhtml_branch_coverage=1 00:06:03.671 --rc genhtml_function_coverage=1 00:06:03.671 --rc genhtml_legend=1 00:06:03.671 --rc geninfo_all_blocks=1 00:06:03.671 --no-external' 00:06:03.671 00:25:37 -- unit/unittest.sh@200 -- # LCOV='lcov 00:06:03.671 --rc lcov_branch_coverage=1 00:06:03.671 --rc lcov_function_coverage=1 00:06:03.671 --rc genhtml_branch_coverage=1 00:06:03.671 --rc genhtml_function_coverage=1 00:06:03.671 --rc genhtml_legend=1 00:06:03.671 --rc geninfo_all_blocks=1 00:06:03.671 --no-external' 00:06:03.671 00:25:37 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:08.943 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:08.943 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:18.948 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:18.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:18.949 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:18.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:18.949 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:18.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:45.523 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:45.523 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:45.524 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:45.524 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:45.524 
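The "no functions found" warnings above and below are the expected by-product of the baseline coverage capture started earlier: lcov runs with -i (initial, zero-count) capture, and the .gcno files produced for the header-compile checks under test/cpp_headers contain no function records for geninfo to read. A minimal sketch of that invocation, with the options and output path copied from this run:

lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
     --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
     --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external \
     -q -c -i -d . -t Baseline \
     -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info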
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:45.524 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:45.524 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:45.525 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:45.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:47.426 00:26:20 -- unit/unittest.sh@206 -- # uname -m 00:06:47.426 00:26:20 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:06:47.426 00:26:20 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:47.426 00:26:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.426 00:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.426 00:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:47.426 ************************************ 00:06:47.426 START TEST unittest_pci_event 00:06:47.426 ************************************ 00:06:47.426 00:26:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:47.426 00:06:47.426 00:06:47.426 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.426 http://cunit.sourceforge.net/ 00:06:47.426 00:06:47.426 00:06:47.426 Suite: pci_event 00:06:47.426 Test: test_pci_parse_event ...[2024-04-27 00:26:20.945817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:47.426 [2024-04-27 00:26:20.946581] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 
000000 00:06:47.426 passed 00:06:47.426 00:06:47.426 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.426 suites 1 1 n/a 0 0 00:06:47.427 tests 1 1 1 0 0 00:06:47.427 asserts 15 15 15 0 n/a 00:06:47.427 00:06:47.427 Elapsed time = 0.001 seconds 00:06:47.427 00:06:47.427 real 0m0.030s 00:06:47.427 user 0m0.012s 00:06:47.427 sys 0m0.016s 00:06:47.427 00:26:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.427 00:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:47.427 ************************************ 00:06:47.427 END TEST unittest_pci_event 00:06:47.427 ************************************ 00:06:47.427 00:26:20 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:47.427 00:26:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.427 00:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.427 00:26:20 -- common/autotest_common.sh@10 -- # set +x 00:06:47.685 ************************************ 00:06:47.685 START TEST unittest_include 00:06:47.685 ************************************ 00:06:47.685 00:26:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:47.685 00:06:47.685 00:06:47.685 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.685 http://cunit.sourceforge.net/ 00:06:47.685 00:06:47.685 00:06:47.685 Suite: histogram 00:06:47.685 Test: histogram_test ...passed 00:06:47.685 Test: histogram_merge ...passed 00:06:47.685 00:06:47.685 Run Summary: Type Total Ran Passed Failed Inactive 00:06:47.685 suites 1 1 n/a 0 0 00:06:47.685 tests 2 2 2 0 0 00:06:47.685 asserts 50 50 50 0 n/a 00:06:47.685 00:06:47.685 Elapsed time = 0.006 seconds 00:06:47.685 00:06:47.685 real 0m0.034s 00:06:47.685 user 0m0.017s 00:06:47.685 sys 0m0.017s 00:06:47.685 00:26:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.685 00:26:21 -- common/autotest_common.sh@10 -- # set +x 00:06:47.685 ************************************ 00:06:47.685 END TEST unittest_include 00:06:47.685 ************************************ 00:06:47.685 00:26:21 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:06:47.685 00:26:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.685 00:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.685 00:26:21 -- common/autotest_common.sh@10 -- # set +x 00:06:47.685 ************************************ 00:06:47.685 START TEST unittest_bdev 00:06:47.685 ************************************ 00:06:47.685 00:26:21 -- common/autotest_common.sh@1111 -- # unittest_bdev 00:06:47.685 00:26:21 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:47.685 00:06:47.685 00:06:47.685 CUnit - A unit testing framework for C - Version 2.1-3 00:06:47.685 http://cunit.sourceforge.net/ 00:06:47.685 00:06:47.685 00:06:47.685 Suite: bdev 00:06:47.685 Test: bytes_to_blocks_test ...passed 00:06:47.685 Test: num_blocks_test ...passed 00:06:47.685 Test: io_valid_test ...passed 00:06:47.685 Test: open_write_test ...[2024-04-27 00:26:21.262916] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8005:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:47.685 [2024-04-27 00:26:21.263276] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8005:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:47.685 [2024-04-27 
00:26:21.263452] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8005:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:47.944 passed 00:06:47.944 Test: claim_test ...passed 00:06:47.944 Test: alias_add_del_test ...[2024-04-27 00:26:21.351735] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4551:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:47.944 [2024-04-27 00:26:21.351886] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4581:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:47.944 [2024-04-27 00:26:21.351944] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4551:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:47.944 passed 00:06:47.944 Test: get_device_stat_test ...passed 00:06:47.944 Test: bdev_io_types_test ...passed 00:06:47.944 Test: bdev_io_wait_test ...passed 00:06:47.944 Test: bdev_io_spans_split_test ...passed 00:06:47.944 Test: bdev_io_boundary_split_test ...passed 00:06:47.944 Test: bdev_io_max_size_and_segment_split_test ...[2024-04-27 00:26:21.505550] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3188:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:47.944 passed 00:06:48.203 Test: bdev_io_mix_split_test ...passed 00:06:48.203 Test: bdev_io_split_with_io_wait ...passed 00:06:48.203 Test: bdev_io_write_unit_split_test ...[2024-04-27 00:26:21.607325] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2741:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:48.203 [2024-04-27 00:26:21.607452] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2741:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:48.203 [2024-04-27 00:26:21.607489] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2741:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:48.203 [2024-04-27 00:26:21.607544] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2741:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:48.203 passed 00:06:48.203 Test: bdev_io_alignment_with_boundary ...passed 00:06:48.203 Test: bdev_io_alignment ...passed 00:06:48.203 Test: bdev_histograms ...passed 00:06:48.203 Test: bdev_write_zeroes ...passed 00:06:48.460 Test: bdev_compare_and_write ...passed 00:06:48.461 Test: bdev_compare ...passed 00:06:48.461 Test: bdev_compare_emulated ...passed 00:06:48.461 Test: bdev_zcopy_write ...passed 00:06:48.461 Test: bdev_zcopy_read ...passed 00:06:48.461 Test: bdev_open_while_hotremove ...passed 00:06:48.461 Test: bdev_close_while_hotremove ...passed 00:06:48.461 Test: bdev_open_ext_test ...[2024-04-27 00:26:21.986753] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:48.461 passed 00:06:48.461 Test: bdev_open_ext_unregister ...passed 00:06:48.461 Test: bdev_set_io_timeout ...[2024-04-27 00:26:21.986997] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8111:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:48.461 passed 00:06:48.718 Test: bdev_set_qd_sampling ...passed 00:06:48.718 Test: lba_range_overlap ...passed 00:06:48.718 Test: lock_lba_range_check_ranges ...passed 00:06:48.718 Test: lock_lba_range_with_io_outstanding ...passed 00:06:48.718 Test: lock_lba_range_overlapped ...passed 00:06:48.718 Test: bdev_quiesce ...[2024-04-27 00:26:22.154929] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10034:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:06:48.718 passed 00:06:48.718 Test: bdev_io_abort ...passed 00:06:48.718 Test: bdev_unmap ...passed 00:06:48.718 Test: bdev_write_zeroes_split_test ...passed 00:06:48.718 Test: bdev_set_options_test ...[2024-04-27 00:26:22.265170] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 484:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:48.718 passed 00:06:48.718 Test: bdev_get_memory_domains ...passed 00:06:48.718 Test: bdev_io_ext ...passed 00:06:48.977 Test: bdev_io_ext_no_opts ...passed 00:06:48.977 Test: bdev_io_ext_invalid_opts ...passed 00:06:48.977 Test: bdev_io_ext_split ...passed 00:06:48.977 Test: bdev_io_ext_bounce_buffer ...passed 00:06:48.977 Test: bdev_register_uuid_alias ...[2024-04-27 00:26:22.438586] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 57392408-f908-4c08-b4c7-ef74c2641851 already exists 00:06:48.977 [2024-04-27 00:26:22.438698] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:57392408-f908-4c08-b4c7-ef74c2641851 alias for bdev bdev0 00:06:48.977 passed 00:06:48.977 Test: bdev_unregister_by_name ...[2024-04-27 00:26:22.453377] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7901:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:48.977 passed 00:06:48.977 Test: for_each_bdev_test ...[2024-04-27 00:26:22.453450] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7909:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:48.977 passed 00:06:48.977 Test: bdev_seek_test ...passed 00:06:48.977 Test: bdev_copy ...passed 00:06:48.977 Test: bdev_copy_split_test ...passed 00:06:48.977 Test: examine_locks ...passed 00:06:48.977 Test: claim_v2_rwo ...[2024-04-27 00:26:22.550003] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8005:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550083] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8635:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550103] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550171] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550191] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8472:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550246] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8630:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:48.977 passed 00:06:48.977 Test: claim_v2_rom ...[2024-04-27 00:26:22.550417] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8005:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550472] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550494] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550517] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8472:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:48.977 passed 00:06:48.977 Test: claim_v2_rwm ...[2024-04-27 00:26:22.550560] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8673:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:48.977 [2024-04-27 00:26:22.550593] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8668:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:48.977 [2024-04-27 00:26:22.550709] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:48.977 [2024-04-27 00:26:22.550762] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8005:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550787] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550812] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550831] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8472:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550857] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8723:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.550896] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:48.977 passed 00:06:48.977 Test: claim_v2_existing_writer ...[2024-04-27 00:26:22.551048] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8668:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:48.977 passed 00:06:48.977 Test: claim_v2_existing_v1 ...[2024-04-27 00:26:22.551081] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8668:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:48.977 passed 00:06:48.977 Test: claim_v1_existing_v2 ...[2024-04-27 00:26:22.551204] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.551236] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.551255] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:48.977 passed 00:06:48.977 Test: examine_claimed ...[2024-04-27 00:26:22.551368] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8472:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.551422] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8472:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type 
read_many_write_many by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.551457] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8472:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:48.977 [2024-04-27 00:26:22.551724] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8800:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:48.977 passed 00:06:48.977 00:06:48.977 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.977 suites 1 1 n/a 0 0 00:06:48.977 tests 59 59 59 0 0 00:06:48.977 asserts 4599 4599 4599 0 n/a 00:06:48.977 00:06:48.977 Elapsed time = 1.364 seconds 00:06:49.236 00:26:22 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:49.236 00:06:49.236 00:06:49.236 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.236 http://cunit.sourceforge.net/ 00:06:49.236 00:06:49.236 00:06:49.236 Suite: nvme 00:06:49.236 Test: test_create_ctrlr ...passed 00:06:49.236 Test: test_reset_ctrlr ...[2024-04-27 00:26:22.605192] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 passed 00:06:49.236 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:49.236 Test: test_failover_ctrlr ...passed 00:06:49.236 Test: test_race_between_failover_and_add_secondary_trid ...[2024-04-27 00:26:22.607926] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.608258] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.608497] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 passed 00:06:49.236 Test: test_pending_reset ...[2024-04-27 00:26:22.610210] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.610497] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 passed 00:06:49.236 Test: test_attach_ctrlr ...[2024-04-27 00:26:22.611714] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:49.236 passed 00:06:49.236 Test: test_aer_cb ...passed 00:06:49.236 Test: test_submit_nvme_cmd ...passed 00:06:49.236 Test: test_add_remove_trid ...passed 00:06:49.236 Test: test_abort ...[2024-04-27 00:26:22.615345] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7392:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:49.236 passed 00:06:49.236 Test: test_get_io_qpair ...passed 00:06:49.236 Test: test_bdev_unregister ...passed 00:06:49.236 Test: test_compare_ns ...passed 00:06:49.236 Test: test_init_ana_log_page ...passed 00:06:49.236 Test: test_get_memory_domains ...passed 00:06:49.236 Test: test_reconnect_qpair ...[2024-04-27 00:26:22.618175] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:49.236 passed 00:06:49.236 Test: test_create_bdev_ctrlr ...[2024-04-27 00:26:22.618743] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5340:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:49.236 passed 00:06:49.236 Test: test_add_multi_ns_to_bdev ...[2024-04-27 00:26:22.620024] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4532:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:49.236 passed 00:06:49.236 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:49.236 Test: test_admin_path ...passed 00:06:49.236 Test: test_reset_bdev_ctrlr ...passed 00:06:49.236 Test: test_find_io_path ...passed 00:06:49.236 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:49.236 Test: test_retry_io_for_io_path_error ...passed 00:06:49.236 Test: test_retry_io_count ...passed 00:06:49.236 Test: test_concurrent_read_ana_log_page ...passed 00:06:49.236 Test: test_retry_io_for_ana_error ...passed 00:06:49.236 Test: test_check_io_error_resiliency_params ...[2024-04-27 00:26:22.627797] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6022:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:49.236 [2024-04-27 00:26:22.627876] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6026:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:49.236 [2024-04-27 00:26:22.627910] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6035:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:49.236 [2024-04-27 00:26:22.627946] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6038:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:49.236 [2024-04-27 00:26:22.627992] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6050:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:49.236 [2024-04-27 00:26:22.628027] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6050:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:49.236 [2024-04-27 00:26:22.628058] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6030:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:49.236 [2024-04-27 00:26:22.628123] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6045:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:49.236 [2024-04-27 00:26:22.628164] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6042:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:49.236 passed 00:06:49.236 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:06:49.236 Test: test_reconnect_ctrlr ...[2024-04-27 00:26:22.629079] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.629268] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:49.236 [2024-04-27 00:26:22.629528] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.629685] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.629883] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 passed 00:06:49.236 Test: test_retry_failover_ctrlr ...[2024-04-27 00:26:22.630269] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 passed 00:06:49.236 Test: test_fail_path ...[2024-04-27 00:26:22.630922] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.631100] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.631242] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.236 [2024-04-27 00:26:22.631367] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.237 [2024-04-27 00:26:22.631531] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.237 passed 00:06:49.237 Test: test_nvme_ns_cmp ...passed 00:06:49.237 Test: test_ana_transition ...passed 00:06:49.237 Test: test_set_preferred_path ...passed 00:06:49.237 Test: test_find_next_io_path ...passed 00:06:49.237 Test: test_find_io_path_min_qd ...passed 00:06:49.237 Test: test_disable_auto_failback ...[2024-04-27 00:26:22.633388] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.237 passed 00:06:49.237 Test: test_set_multipath_policy ...passed 00:06:49.237 Test: test_uuid_generation ...passed 00:06:49.237 Test: test_retry_io_to_same_path ...passed 00:06:49.237 Test: test_race_between_reset_and_disconnected ...passed 00:06:49.237 Test: test_ctrlr_op_rpc ...passed 00:06:49.237 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:49.237 Test: test_disable_enable_ctrlr ...[2024-04-27 00:26:22.637311] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:49.237 [2024-04-27 00:26:22.637502] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:49.237 passed 00:06:49.237 Test: test_delete_ctrlr_done ...passed 00:06:49.237 Test: test_ns_remove_during_reset ...passed 00:06:49.237 00:06:49.237 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.237 suites 1 1 n/a 0 0 00:06:49.237 tests 48 48 48 0 0 00:06:49.237 asserts 3565 3565 3565 0 n/a 00:06:49.237 00:06:49.237 Elapsed time = 0.035 seconds 00:06:49.237 00:26:22 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:49.237 00:06:49.237 00:06:49.237 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.237 http://cunit.sourceforge.net/ 00:06:49.237 00:06:49.237 Test Options 00:06:49.237 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:06:49.237 00:06:49.237 Suite: raid 00:06:49.237 Test: test_create_raid ...passed 00:06:49.237 Test: test_create_raid_superblock ...passed 00:06:49.237 Test: test_delete_raid ...passed 00:06:49.237 Test: test_create_raid_invalid_args ...[2024-04-27 00:26:22.680097] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:49.237 [2024-04-27 00:26:22.680545] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:49.237 [2024-04-27 00:26:22.681027] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:49.237 [2024-04-27 00:26:22.681281] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:49.237 [2024-04-27 00:26:22.682092] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:49.237 passed 00:06:49.237 Test: test_delete_raid_invalid_args ...passed 00:06:49.237 Test: test_io_channel ...passed 00:06:49.237 Test: test_reset_io ...passed 00:06:49.237 Test: test_write_io ...passed 00:06:49.237 Test: test_read_io ...passed 00:06:50.174 Test: test_unmap_io ...passed 00:06:50.174 Test: test_io_failure ...[2024-04-27 00:26:23.453052] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:50.174 passed 00:06:50.174 Test: test_multi_raid_no_io ...passed 00:06:50.174 Test: test_multi_raid_with_io ...passed 00:06:50.174 Test: test_io_type_supported ...passed 00:06:50.174 Test: test_raid_json_dump_info ...passed 00:06:50.174 Test: test_context_size ...passed 00:06:50.174 Test: test_raid_level_conversions ...passed 00:06:50.174 Test: test_raid_io_split ...passedTest Options 00:06:50.174 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 1 00:06:50.174 00:06:50.174 Suite: raid_dif 00:06:50.174 Test: test_create_raid ...passed 00:06:50.174 Test: test_create_raid_superblock ...passed 00:06:50.174 Test: test_delete_raid ...passed 00:06:50.174 Test: test_create_raid_invalid_args ...[2024-04-27 00:26:23.460178] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1487:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:50.174 [2024-04-27 00:26:23.460318] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:50.174 [2024-04-27 00:26:23.460541] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1471:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:50.174 [2024-04-27 00:26:23.460615] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:50.174 [2024-04-27 00:26:23.461135] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3113:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:50.174 passed 00:06:50.174 Test: test_delete_raid_invalid_args ...passed 00:06:50.174 Test: test_io_channel ...passed 00:06:50.174 Test: test_reset_io ...passed 00:06:50.174 Test: test_write_io ...passed 00:06:50.174 Test: test_read_io ...passed 00:06:50.742 Test: test_unmap_io ...passed 00:06:50.742 Test: test_io_failure ...[2024-04-27 00:26:24.226364] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 962:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:50.742 passed 00:06:50.742 Test: test_multi_raid_no_io ...passed 00:06:50.742 Test: test_multi_raid_with_io ...passed 00:06:50.742 Test: test_io_type_supported ...passed 00:06:50.742 Test: test_raid_json_dump_info ...passed 00:06:50.742 Test: test_context_size ...passed 00:06:50.742 Test: test_raid_level_conversions ...passed 00:06:50.742 Test: test_raid_io_split ...passedTest Options 00:06:50.742 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2, g_enable_dif = 0 00:06:50.742 00:06:50.742 Suite: raid_single_run 00:06:50.742 Test: test_raid_process ...passed 00:06:50.742 00:06:50.742 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.742 suites 3 3 n/a 0 0 00:06:50.742 tests 37 37 37 0 0 00:06:50.742 asserts 355354 355354 355354 0 n/a 00:06:50.742 00:06:50.742 Elapsed time = 1.559 seconds 00:06:50.742 00:26:24 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:50.742 00:06:50.742 00:06:50.742 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.742 http://cunit.sourceforge.net/ 00:06:50.742 00:06:50.742 00:06:50.742 Suite: raid_sb 00:06:50.742 Test: test_raid_bdev_write_superblock ...passed 00:06:50.742 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:50.742 Test: test_raid_bdev_parse_superblock ...[2024-04-27 00:26:24.281005] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:50.742 passed 00:06:50.742 Suite: raid_sb_md 00:06:50.742 Test: test_raid_bdev_write_superblock ...passed 00:06:50.742 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:50.742 Test: test_raid_bdev_parse_superblock ...passed 00:06:50.742 Suite: raid_sb_md_interleaved 00:06:50.742 Test: test_raid_bdev_write_superblock ...passed 00:06:50.742 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:50.742 Test: test_raid_bdev_parse_superblock ...[2024-04-27 00:26:24.281484] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:50.742 passed 00:06:50.742 00:06:50.742 [2024-04-27 00:26:24.281764] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 163:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:50.742 Run Summary: Type Total Ran Passed Failed Inactive 
00:06:50.742 suites 3 3 n/a 0 0 00:06:50.742 tests 9 9 9 0 0 00:06:50.742 asserts 136 136 136 0 n/a 00:06:50.742 00:06:50.743 Elapsed time = 0.002 seconds 00:06:50.743 00:26:24 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:50.743 00:06:50.743 00:06:50.743 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.743 http://cunit.sourceforge.net/ 00:06:50.743 00:06:50.743 00:06:50.743 Suite: concat 00:06:50.743 Test: test_concat_start ...passed 00:06:50.743 Test: test_concat_rw ...passed 00:06:50.743 Test: test_concat_null_payload ...passed 00:06:50.743 00:06:50.743 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.743 suites 1 1 n/a 0 0 00:06:50.743 tests 3 3 3 0 0 00:06:50.743 asserts 8460 8460 8460 0 n/a 00:06:50.743 00:06:50.743 Elapsed time = 0.008 seconds 00:06:51.002 00:26:24 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:51.002 00:06:51.002 00:06:51.002 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.002 http://cunit.sourceforge.net/ 00:06:51.002 00:06:51.002 00:06:51.002 Suite: raid1 00:06:51.002 Test: test_raid1_start ...passed 00:06:51.002 Test: test_raid1_read_balancing ...passed 00:06:51.002 00:06:51.002 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.002 suites 1 1 n/a 0 0 00:06:51.002 tests 2 2 2 0 0 00:06:51.002 asserts 2880 2880 2880 0 n/a 00:06:51.002 00:06:51.002 Elapsed time = 0.004 seconds 00:06:51.002 00:26:24 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:51.002 00:06:51.002 00:06:51.002 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.002 http://cunit.sourceforge.net/ 00:06:51.002 00:06:51.002 00:06:51.002 Suite: zone 00:06:51.002 Test: test_zone_get_operation ...passed 00:06:51.002 Test: test_bdev_zone_get_info ...passed 00:06:51.002 Test: test_bdev_zone_management ...passed 00:06:51.002 Test: test_bdev_zone_append ...passed 00:06:51.002 Test: test_bdev_zone_append_with_md ...passed 00:06:51.002 Test: test_bdev_zone_appendv ...passed 00:06:51.002 Test: test_bdev_zone_appendv_with_md ...passed 00:06:51.002 Test: test_bdev_io_get_append_location ...passed 00:06:51.002 00:06:51.002 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.002 suites 1 1 n/a 0 0 00:06:51.002 tests 8 8 8 0 0 00:06:51.002 asserts 94 94 94 0 n/a 00:06:51.002 00:06:51.002 Elapsed time = 0.001 seconds 00:06:51.002 00:26:24 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:51.002 00:06:51.003 00:06:51.003 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.003 http://cunit.sourceforge.net/ 00:06:51.003 00:06:51.003 00:06:51.003 Suite: gpt_parse 00:06:51.003 Test: test_parse_mbr_and_primary ...[2024-04-27 00:26:24.418567] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:51.003 [2024-04-27 00:26:24.418864] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:51.003 [2024-04-27 00:26:24.418951] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:51.003 [2024-04-27 00:26:24.419045] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:51.003 [2024-04-27 00:26:24.419098] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:51.003 [2024-04-27 00:26:24.419196] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:51.003 passed 00:06:51.003 Test: test_parse_secondary ...[2024-04-27 00:26:24.420049] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:51.003 [2024-04-27 00:26:24.420120] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:51.003 [2024-04-27 00:26:24.420169] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:51.003 [2024-04-27 00:26:24.420212] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:51.003 passed 00:06:51.003 Test: test_check_mbr ...[2024-04-27 00:26:24.421046] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:51.003 [2024-04-27 00:26:24.421106] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:51.003 passed 00:06:51.003 Test: test_read_header ...[2024-04-27 00:26:24.421177] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:51.003 [2024-04-27 00:26:24.421321] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:51.003 [2024-04-27 00:26:24.421407] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:51.003 [2024-04-27 00:26:24.421457] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:51.003 [2024-04-27 00:26:24.421503] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:51.003 passed 00:06:51.003 Test: test_read_partitions ...[2024-04-27 00:26:24.421553] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:51.003 [2024-04-27 00:26:24.421627] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:51.003 [2024-04-27 00:26:24.421687] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:51.003 [2024-04-27 00:26:24.421748] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:51.003 [2024-04-27 00:26:24.421787] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:51.003 [2024-04-27 00:26:24.422227] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:51.003 passed 00:06:51.003 00:06:51.003 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.003 suites 1 1 n/a 0 0 00:06:51.003 tests 5 5 5 0 0 00:06:51.003 asserts 33 33 33 0 n/a 00:06:51.003 00:06:51.003 Elapsed time = 0.005 seconds 
00:06:51.003 00:26:24 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:51.003 00:06:51.003 00:06:51.003 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.003 http://cunit.sourceforge.net/ 00:06:51.003 00:06:51.003 00:06:51.003 Suite: bdev_part 00:06:51.003 Test: part_test ...[2024-04-27 00:26:24.456343] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4551:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:51.003 passed 00:06:51.003 Test: part_free_test ...passed 00:06:51.003 Test: part_get_io_channel_test ...passed 00:06:51.003 Test: part_construct_ext ...passed 00:06:51.003 00:06:51.003 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.003 suites 1 1 n/a 0 0 00:06:51.003 tests 4 4 4 0 0 00:06:51.003 asserts 48 48 48 0 n/a 00:06:51.003 00:06:51.003 Elapsed time = 0.052 seconds 00:06:51.003 00:26:24 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:51.003 00:06:51.003 00:06:51.003 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.003 http://cunit.sourceforge.net/ 00:06:51.003 00:06:51.003 00:06:51.003 Suite: scsi_nvme_suite 00:06:51.003 Test: scsi_nvme_translate_test ...passed 00:06:51.003 00:06:51.003 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.003 suites 1 1 n/a 0 0 00:06:51.003 tests 1 1 1 0 0 00:06:51.003 asserts 104 104 104 0 n/a 00:06:51.003 00:06:51.003 Elapsed time = 0.000 seconds 00:06:51.003 00:26:24 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:51.003 00:06:51.003 00:06:51.003 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.003 http://cunit.sourceforge.net/ 00:06:51.003 00:06:51.003 00:06:51.003 Suite: lvol 00:06:51.003 Test: ut_lvs_init ...[2024-04-27 00:26:24.572708] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:51.003 [2024-04-27 00:26:24.573110] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:51.003 passed 00:06:51.003 Test: ut_lvol_init ...passed 00:06:51.003 Test: ut_lvol_snapshot ...passed 00:06:51.003 Test: ut_lvol_clone ...passed 00:06:51.003 Test: ut_lvs_destroy ...passed 00:06:51.003 Test: ut_lvs_unload ...passed 00:06:51.003 Test: ut_lvol_resize ...[2024-04-27 00:26:24.574639] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:51.003 passed 00:06:51.003 Test: ut_lvol_set_read_only ...passed 00:06:51.003 Test: ut_lvol_hotremove ...passed 00:06:51.003 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:51.003 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:51.003 Test: ut_lvol_read_write ...passed 00:06:51.003 Test: ut_vbdev_lvol_submit_request ...passed 00:06:51.003 Test: ut_lvol_examine_config ...passed 00:06:51.003 Test: ut_lvol_examine_disk ...[2024-04-27 00:26:24.575394] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:51.003 passed 00:06:51.003 Test: ut_lvol_rename ...[2024-04-27 00:26:24.576363] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:51.003 [2024-04-27 00:26:24.576471] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' 
does not succeed 00:06:51.003 passed 00:06:51.003 Test: ut_bdev_finish ...passed 00:06:51.003 Test: ut_lvs_rename ...passed 00:06:51.003 Test: ut_lvol_seek ...passed 00:06:51.003 Test: ut_esnap_dev_create ...[2024-04-27 00:26:24.577124] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:51.003 [2024-04-27 00:26:24.577207] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:51.003 [2024-04-27 00:26:24.577258] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:51.003 [2024-04-27 00:26:24.577323] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:51.003 passed 00:06:51.003 Test: ut_lvol_esnap_clone_bad_args ...[2024-04-27 00:26:24.577490] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:51.003 [2024-04-27 00:26:24.577537] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:51.003 passed 00:06:51.003 00:06:51.003 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.003 suites 1 1 n/a 0 0 00:06:51.003 tests 21 21 21 0 0 00:06:51.003 asserts 758 758 758 0 n/a 00:06:51.003 00:06:51.003 Elapsed time = 0.005 seconds 00:06:51.263 00:26:24 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:51.263 00:06:51.263 00:06:51.263 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.263 http://cunit.sourceforge.net/ 00:06:51.263 00:06:51.263 00:06:51.263 Suite: zone_block 00:06:51.263 Test: test_zone_block_create ...passed 00:06:51.263 Test: test_zone_block_create_invalid ...[2024-04-27 00:26:24.629812] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:51.263 [2024-04-27 00:26:24.630128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-27 00:26:24.630291] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:51.263 [2024-04-27 00:26:24.630404] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-27 00:26:24.630536] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:51.263 [2024-04-27 00:26:24.630579] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-27 00:26:24.630669] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:51.263 [2024-04-27 00:26:24.630723] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned 
vbdev: Invalid argumentpassed 00:06:51.263 Test: test_get_zone_info ...[2024-04-27 00:26:24.631195] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 [2024-04-27 00:26:24.631284] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 [2024-04-27 00:26:24.631337] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 passed 00:06:51.263 Test: test_supported_io_types ...passed 00:06:51.263 Test: test_reset_zone ...[2024-04-27 00:26:24.632043] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 [2024-04-27 00:26:24.632139] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 passed 00:06:51.263 Test: test_open_zone ...[2024-04-27 00:26:24.632526] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 [2024-04-27 00:26:24.633136] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 [2024-04-27 00:26:24.633233] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 passed 00:06:51.263 Test: test_zone_write ...[2024-04-27 00:26:24.633657] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:51.263 [2024-04-27 00:26:24.633740] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 [2024-04-27 00:26:24.633799] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:51.263 [2024-04-27 00:26:24.633862] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 [2024-04-27 00:26:24.638649] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:51.263 [2024-04-27 00:26:24.638697] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.263 [2024-04-27 00:26:24.638798] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:51.263 [2024-04-27 00:26:24.638831] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:51.263 [2024-04-27 00:26:24.643765] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:51.264 [2024-04-27 00:26:24.643838] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 passed 00:06:51.264 Test: test_zone_read ...[2024-04-27 00:26:24.644252] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:51.264 [2024-04-27 00:26:24.644303] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 [2024-04-27 00:26:24.644366] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:51.264 [2024-04-27 00:26:24.644401] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 [2024-04-27 00:26:24.644784] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:51.264 [2024-04-27 00:26:24.644854] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 passed 00:06:51.264 Test: test_close_zone ...[2024-04-27 00:26:24.645212] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 [2024-04-27 00:26:24.645312] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 [2024-04-27 00:26:24.645491] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 passed 00:06:51.264 Test: test_finish_zone ...[2024-04-27 00:26:24.645548] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 [2024-04-27 00:26:24.646122] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 [2024-04-27 00:26:24.646229] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 passed 00:06:51.264 Test: test_append_zone ...[2024-04-27 00:26:24.646587] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:51.264 [2024-04-27 00:26:24.646637] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 [2024-04-27 00:26:24.646706] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:51.264 [2024-04-27 00:26:24.646734] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:51.264 [2024-04-27 00:26:24.657133] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:51.264 [2024-04-27 00:26:24.657227] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:51.264 passed 00:06:51.264 00:06:51.264 Run Summary: Type Total Ran Passed Failed Inactive 00:06:51.264 suites 1 1 n/a 0 0 00:06:51.264 tests 11 11 11 0 0 00:06:51.264 asserts 3437 3437 3437 0 n/a 00:06:51.264 00:06:51.264 Elapsed time = 0.029 seconds 00:06:51.264 00:26:24 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:51.264 00:06:51.264 00:06:51.264 CUnit - A unit testing framework for C - Version 2.1-3 00:06:51.264 http://cunit.sourceforge.net/ 00:06:51.264 00:06:51.264 00:06:51.264 Suite: bdev 00:06:51.264 Test: basic ...[2024-04-27 00:26:24.756170] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55c89445a3e1): Operation not permitted (rc=-1) 00:06:51.264 [2024-04-27 00:26:24.756625] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55c89445a3a0): Operation not permitted (rc=-1) 00:06:51.264 [2024-04-27 00:26:24.756938] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55c89445a3e1): Operation not permitted (rc=-1) 00:06:51.264 passed 00:06:51.264 Test: unregister_and_close ...passed 00:06:51.523 Test: unregister_and_close_different_threads ...passed 00:06:51.523 Test: basic_qos ...passed 00:06:51.523 Test: put_channel_during_reset ...passed 00:06:51.523 Test: aborted_reset ...passed 00:06:51.523 Test: aborted_reset_no_outstanding_io ...passed 00:06:51.523 Test: io_during_reset ...passed 00:06:51.523 Test: reset_completions ...passed 00:06:51.782 Test: io_during_qos_queue ...passed 00:06:51.782 Test: io_during_qos_reset ...passed 00:06:51.782 Test: enomem ...passed 00:06:51.782 Test: enomem_multi_bdev ...passed 00:06:51.782 Test: enomem_multi_bdev_unregister ...passed 00:06:51.782 Test: enomem_multi_io_target ...passed 00:06:51.782 Test: qos_dynamic_enable ...passed 00:06:52.041 Test: bdev_histograms_mt ...passed 00:06:52.041 Test: bdev_set_io_timeout_mt ...[2024-04-27 00:26:25.421191] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:52.041 passed 00:06:52.041 Test: lock_lba_range_then_submit_io ...[2024-04-27 00:26:25.437438] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55c89445a360 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:52.041 passed 00:06:52.041 Test: unregister_during_reset ...passed 00:06:52.041 Test: event_notify_and_close ...passed 00:06:52.041 Suite: bdev_wrong_thread 00:06:52.041 Test: spdk_bdev_register_wt ...[2024-04-27 00:26:25.529126] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8429:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:06:52.041 passed 00:06:52.041 Test: spdk_bdev_examine_wt ...[2024-04-27 00:26:25.529681] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 792:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:06:52.041 passed 00:06:52.041 00:06:52.041 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.041 suites 2 2 n/a 0 0 00:06:52.041 tests 23 23 23 0 0 00:06:52.041 
asserts 601 601 601 0 n/a 00:06:52.041 00:06:52.041 Elapsed time = 0.805 seconds 00:06:52.041 00:06:52.041 real 0m4.389s 00:06:52.041 user 0m1.864s 00:06:52.041 sys 0m2.528s 00:06:52.041 00:26:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.041 00:26:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.041 ************************************ 00:06:52.041 END TEST unittest_bdev 00:06:52.041 ************************************ 00:06:52.041 00:26:25 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:52.041 00:26:25 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:52.041 00:26:25 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:52.041 00:26:25 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:52.041 00:26:25 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:52.041 00:26:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.041 00:26:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.041 00:26:25 -- common/autotest_common.sh@10 -- # set +x 00:06:52.300 ************************************ 00:06:52.300 START TEST unittest_bdev_raid5f 00:06:52.300 ************************************ 00:06:52.300 00:26:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:52.300 00:06:52.300 00:06:52.300 CUnit - A unit testing framework for C - Version 2.1-3 00:06:52.300 http://cunit.sourceforge.net/ 00:06:52.300 00:06:52.300 00:06:52.300 Suite: raid5f 00:06:52.300 Test: test_raid5f_start ...passed 00:06:52.558 Test: test_raid5f_submit_read_request ...passed 00:06:52.817 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:56.105 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:10.984 Test: test_raid5f_chunk_write_error ...passed 00:07:19.141 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:21.673 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:53.745 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:53.745 00:07:53.745 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.745 suites 1 1 n/a 0 0 00:07:53.745 tests 8 8 8 0 0 00:07:53.745 asserts 352392 352392 352392 0 n/a 00:07:53.745 00:07:53.745 Elapsed time = 56.373 seconds 00:07:53.745 00:07:53.745 real 0m56.489s 00:07:53.745 user 0m53.602s 00:07:53.745 sys 0m2.846s 00:07:53.745 00:27:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.745 00:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:53.745 ************************************ 00:07:53.745 END TEST unittest_bdev_raid5f 00:07:53.745 ************************************ 00:07:53.745 00:27:22 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:53.745 00:27:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.745 00:27:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.745 00:27:22 -- common/autotest_common.sh@10 -- # set +x 00:07:53.745 ************************************ 00:07:53.745 START TEST unittest_blob_blobfs 00:07:53.745 ************************************ 00:07:53.745 00:27:22 -- common/autotest_common.sh@1111 -- # 
unittest_blob 00:07:53.745 00:27:22 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:53.745 00:27:22 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:53.745 00:07:53.745 00:07:53.745 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.745 http://cunit.sourceforge.net/ 00:07:53.745 00:07:53.745 00:07:53.745 Suite: blob_nocopy_noextent 00:07:53.745 Test: blob_init ...[2024-04-27 00:27:22.260656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:53.745 passed 00:07:53.745 Test: blob_thin_provision ...passed 00:07:53.745 Test: blob_read_only ...passed 00:07:53.745 Test: bs_load ...[2024-04-27 00:27:22.366003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:53.745 passed 00:07:53.745 Test: bs_load_custom_cluster_size ...passed 00:07:53.745 Test: bs_load_after_failed_grow ...passed 00:07:53.745 Test: bs_cluster_sz ...[2024-04-27 00:27:22.403084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:53.745 [2024-04-27 00:27:22.403601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:53.745 [2024-04-27 00:27:22.403824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:53.745 passed 00:07:53.745 Test: bs_resize_md ...passed 00:07:53.745 Test: bs_destroy ...passed 00:07:53.745 Test: bs_type ...passed 00:07:53.745 Test: bs_super_block ...passed 00:07:53.745 Test: bs_test_recover_cluster_count ...passed 00:07:53.745 Test: bs_grow_live ...passed 00:07:53.745 Test: bs_grow_live_no_space ...passed 00:07:53.745 Test: bs_test_grow ...passed 00:07:53.745 Test: blob_serialize_test ...passed 00:07:53.745 Test: super_block_crc ...passed 00:07:53.745 Test: blob_thin_prov_write_count_io ...passed 00:07:53.745 Test: blob_thin_prov_unmap_cluster ...passed 00:07:53.745 Test: bs_load_iter_test ...passed 00:07:53.745 Test: blob_relations ...[2024-04-27 00:27:22.627833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.745 [2024-04-27 00:27:22.627970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 [2024-04-27 00:27:22.628937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.745 [2024-04-27 00:27:22.629030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 passed 00:07:53.745 Test: blob_relations2 ...[2024-04-27 00:27:22.645497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.745 [2024-04-27 00:27:22.645615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 [2024-04-27 00:27:22.645651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.745 
[2024-04-27 00:27:22.645719] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 [2024-04-27 00:27:22.647260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.745 [2024-04-27 00:27:22.647347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 [2024-04-27 00:27:22.647767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.745 [2024-04-27 00:27:22.647830] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 passed 00:07:53.745 Test: blob_relations3 ...passed 00:07:53.745 Test: blobstore_clean_power_failure ...passed 00:07:53.745 Test: blob_delete_snapshot_power_failure ...[2024-04-27 00:27:22.832685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:53.745 [2024-04-27 00:27:22.847287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.745 [2024-04-27 00:27:22.847405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.745 [2024-04-27 00:27:22.847445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 [2024-04-27 00:27:22.862468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:53.745 [2024-04-27 00:27:22.862582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:53.745 [2024-04-27 00:27:22.862614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.745 [2024-04-27 00:27:22.862682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 [2024-04-27 00:27:22.878036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:53.745 [2024-04-27 00:27:22.878164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 [2024-04-27 00:27:22.893166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:53.745 [2024-04-27 00:27:22.893345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 [2024-04-27 00:27:22.908417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:53.745 [2024-04-27 00:27:22.908558] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.745 passed 00:07:53.745 Test: blob_create_snapshot_power_failure ...[2024-04-27 00:27:22.952312] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.745 [2024-04-27 00:27:22.980173] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:53.745 [2024-04-27 00:27:22.994582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:53.745 passed 00:07:53.745 Test: blob_io_unit ...passed 00:07:53.746 Test: blob_io_unit_compatibility ...passed 00:07:53.746 Test: blob_ext_md_pages ...passed 00:07:53.746 Test: blob_esnap_io_4096_4096 ...passed 00:07:53.746 Test: blob_esnap_io_512_512 ...passed 00:07:53.746 Test: blob_esnap_io_4096_512 ...passed 00:07:53.746 Test: blob_esnap_io_512_4096 ...passed 00:07:53.746 Suite: blob_bs_nocopy_noextent 00:07:53.746 Test: blob_open ...passed 00:07:53.746 Test: blob_create ...[2024-04-27 00:27:23.279814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:53.746 passed 00:07:53.746 Test: blob_create_loop ...passed 00:07:53.746 Test: blob_create_fail ...[2024-04-27 00:27:23.392128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.746 passed 00:07:53.746 Test: blob_create_internal ...passed 00:07:53.746 Test: blob_create_zero_extent ...passed 00:07:53.746 Test: blob_snapshot ...passed 00:07:53.746 Test: blob_clone ...passed 00:07:53.746 Test: blob_inflate ...[2024-04-27 00:27:23.603219] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:53.746 passed 00:07:53.746 Test: blob_delete ...passed 00:07:53.746 Test: blob_resize_test ...[2024-04-27 00:27:23.679482] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:53.746 passed 00:07:53.746 Test: channel_ops ...passed 00:07:53.746 Test: blob_super ...passed 00:07:53.746 Test: blob_rw_verify_iov ...passed 00:07:53.746 Test: blob_unmap ...passed 00:07:53.746 Test: blob_iter ...passed 00:07:53.746 Test: blob_parse_md ...passed 00:07:53.746 Test: bs_load_pending_removal ...passed 00:07:53.746 Test: bs_unload ...[2024-04-27 00:27:23.994567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:53.746 passed 00:07:53.746 Test: bs_usable_clusters ...passed 00:07:53.746 Test: blob_crc ...[2024-04-27 00:27:24.072608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:53.746 [2024-04-27 00:27:24.072780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:53.746 passed 00:07:53.746 Test: blob_flags ...passed 00:07:53.746 Test: bs_version ...passed 00:07:53.746 Test: blob_set_xattrs_test ...[2024-04-27 00:27:24.195148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.746 [2024-04-27 00:27:24.195290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.746 passed 00:07:53.746 Test: blob_thin_prov_alloc ...passed 00:07:53.746 Test: blob_insert_cluster_msg_test ...passed 00:07:53.746 Test: blob_thin_prov_rw ...passed 
00:07:53.746 Test: blob_thin_prov_rle ...passed 00:07:53.746 Test: blob_thin_prov_rw_iov ...passed 00:07:53.746 Test: blob_snapshot_rw ...passed 00:07:53.746 Test: blob_snapshot_rw_iov ...passed 00:07:53.746 Test: blob_inflate_rw ...passed 00:07:53.746 Test: blob_snapshot_freeze_io ...passed 00:07:53.746 Test: blob_operation_split_rw ...passed 00:07:53.746 Test: blob_operation_split_rw_iov ...passed 00:07:53.746 Test: blob_simultaneous_operations ...[2024-04-27 00:27:25.232129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:53.746 [2024-04-27 00:27:25.232274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:25.233518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:53.746 [2024-04-27 00:27:25.233573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:25.245381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:53.746 [2024-04-27 00:27:25.245474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:25.245610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:53.746 [2024-04-27 00:27:25.245646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 passed 00:07:53.746 Test: blob_persist_test ...passed 00:07:53.746 Test: blob_decouple_snapshot ...passed 00:07:53.746 Test: blob_seek_io_unit ...passed 00:07:53.746 Test: blob_nested_freezes ...passed 00:07:53.746 Suite: blob_blob_nocopy_noextent 00:07:53.746 Test: blob_write ...passed 00:07:53.746 Test: blob_read ...passed 00:07:53.746 Test: blob_rw_verify ...passed 00:07:53.746 Test: blob_rw_verify_iov_nomem ...passed 00:07:53.746 Test: blob_rw_iov_read_only ...passed 00:07:53.746 Test: blob_xattr ...passed 00:07:53.746 Test: blob_dirty_shutdown ...passed 00:07:53.746 Test: blob_is_degraded ...passed 00:07:53.746 Suite: blob_esnap_bs_nocopy_noextent 00:07:53.746 Test: blob_esnap_create ...passed 00:07:53.746 Test: blob_esnap_thread_add_remove ...passed 00:07:53.746 Test: blob_esnap_clone_snapshot ...passed 00:07:53.746 Test: blob_esnap_clone_inflate ...passed 00:07:53.746 Test: blob_esnap_clone_decouple ...passed 00:07:53.746 Test: blob_esnap_clone_reload ...passed 00:07:53.746 Test: blob_esnap_hotplug ...passed 00:07:53.746 Suite: blob_nocopy_extent 00:07:53.746 Test: blob_init ...[2024-04-27 00:27:26.085975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:53.746 passed 00:07:53.746 Test: blob_thin_provision ...passed 00:07:53.746 Test: blob_read_only ...passed 00:07:53.746 Test: bs_load ...[2024-04-27 00:27:26.140775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:53.746 passed 00:07:53.746 Test: bs_load_custom_cluster_size ...passed 00:07:53.746 Test: bs_load_after_failed_grow ...passed 00:07:53.746 Test: bs_cluster_sz ...[2024-04-27 00:27:26.171638] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:53.746 [2024-04-27 00:27:26.171941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:53.746 [2024-04-27 00:27:26.172039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:53.746 passed 00:07:53.746 Test: bs_resize_md ...passed 00:07:53.746 Test: bs_destroy ...passed 00:07:53.746 Test: bs_type ...passed 00:07:53.746 Test: bs_super_block ...passed 00:07:53.746 Test: bs_test_recover_cluster_count ...passed 00:07:53.746 Test: bs_grow_live ...passed 00:07:53.746 Test: bs_grow_live_no_space ...passed 00:07:53.746 Test: bs_test_grow ...passed 00:07:53.746 Test: blob_serialize_test ...passed 00:07:53.746 Test: super_block_crc ...passed 00:07:53.746 Test: blob_thin_prov_write_count_io ...passed 00:07:53.746 Test: blob_thin_prov_unmap_cluster ...passed 00:07:53.746 Test: bs_load_iter_test ...passed 00:07:53.746 Test: blob_relations ...[2024-04-27 00:27:26.381050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.746 [2024-04-27 00:27:26.381203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.382277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.746 [2024-04-27 00:27:26.382347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 passed 00:07:53.746 Test: blob_relations2 ...[2024-04-27 00:27:26.398398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.746 [2024-04-27 00:27:26.398513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.398569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.746 [2024-04-27 00:27:26.398596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.400133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.746 [2024-04-27 00:27:26.400214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.400641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:53.746 [2024-04-27 00:27:26.400704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 passed 00:07:53.746 Test: blob_relations3 ...passed 00:07:53.746 Test: blobstore_clean_power_failure ...passed 00:07:53.746 Test: blob_delete_snapshot_power_failure ...[2024-04-27 00:27:26.590788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 
00:07:53.746 [2024-04-27 00:27:26.605210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.746 [2024-04-27 00:27:26.619848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.746 [2024-04-27 00:27:26.619942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.746 [2024-04-27 00:27:26.619985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.634598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:53.746 [2024-04-27 00:27:26.634700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:53.746 [2024-04-27 00:27:26.634741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.746 [2024-04-27 00:27:26.634772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.649384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.746 [2024-04-27 00:27:26.649488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:53.746 [2024-04-27 00:27:26.649527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:53.746 [2024-04-27 00:27:26.649564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.664388] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:53.746 [2024-04-27 00:27:26.664524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.679776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:53.746 [2024-04-27 00:27:26.679929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 [2024-04-27 00:27:26.694828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:53.746 [2024-04-27 00:27:26.694944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:53.746 passed 00:07:53.746 Test: blob_create_snapshot_power_failure ...[2024-04-27 00:27:26.739483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:53.746 [2024-04-27 00:27:26.754294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:53.746 [2024-04-27 00:27:26.783784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:53.746 [2024-04-27 00:27:26.798999] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:53.746 passed 00:07:53.746 Test: blob_io_unit ...passed 00:07:53.746 Test: blob_io_unit_compatibility ...passed 00:07:53.746 Test: blob_ext_md_pages ...passed 00:07:53.746 Test: blob_esnap_io_4096_4096 ...passed 00:07:53.746 Test: blob_esnap_io_512_512 ...passed 00:07:53.746 Test: blob_esnap_io_4096_512 ...passed 00:07:53.746 Test: blob_esnap_io_512_4096 ...passed 00:07:53.746 Suite: blob_bs_nocopy_extent 00:07:53.746 Test: blob_open ...passed 00:07:53.746 Test: blob_create ...[2024-04-27 00:27:27.100913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:53.746 passed 00:07:53.746 Test: blob_create_loop ...passed 00:07:53.746 Test: blob_create_fail ...[2024-04-27 00:27:27.234837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:53.746 passed 00:07:53.746 Test: blob_create_internal ...passed 00:07:53.746 Test: blob_create_zero_extent ...passed 00:07:54.005 Test: blob_snapshot ...passed 00:07:54.005 Test: blob_clone ...passed 00:07:54.005 Test: blob_inflate ...[2024-04-27 00:27:27.452447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:54.005 passed 00:07:54.005 Test: blob_delete ...passed 00:07:54.005 Test: blob_resize_test ...[2024-04-27 00:27:27.529865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:54.005 passed 00:07:54.005 Test: channel_ops ...passed 00:07:54.264 Test: blob_super ...passed 00:07:54.264 Test: blob_rw_verify_iov ...passed 00:07:54.264 Test: blob_unmap ...passed 00:07:54.264 Test: blob_iter ...passed 00:07:54.264 Test: blob_parse_md ...passed 00:07:54.264 Test: bs_load_pending_removal ...passed 00:07:54.264 Test: bs_unload ...[2024-04-27 00:27:27.845620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:54.523 passed 00:07:54.523 Test: bs_usable_clusters ...passed 00:07:54.523 Test: blob_crc ...[2024-04-27 00:27:27.924216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:54.523 [2024-04-27 00:27:27.924422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:54.523 passed 00:07:54.523 Test: blob_flags ...passed 00:07:54.523 Test: bs_version ...passed 00:07:54.524 Test: blob_set_xattrs_test ...[2024-04-27 00:27:28.049189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:54.524 [2024-04-27 00:27:28.049349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:54.524 passed 00:07:54.783 Test: blob_thin_prov_alloc ...passed 00:07:54.783 Test: blob_insert_cluster_msg_test ...passed 00:07:54.783 Test: blob_thin_prov_rw ...passed 00:07:54.783 Test: blob_thin_prov_rle ...passed 00:07:54.783 Test: blob_thin_prov_rw_iov ...passed 00:07:55.042 Test: blob_snapshot_rw ...passed 00:07:55.042 Test: 
blob_snapshot_rw_iov ...passed 00:07:55.314 Test: blob_inflate_rw ...passed 00:07:55.314 Test: blob_snapshot_freeze_io ...passed 00:07:55.588 Test: blob_operation_split_rw ...passed 00:07:55.588 Test: blob_operation_split_rw_iov ...passed 00:07:55.588 Test: blob_simultaneous_operations ...[2024-04-27 00:27:29.123876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.588 [2024-04-27 00:27:29.124023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.588 [2024-04-27 00:27:29.125368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.588 [2024-04-27 00:27:29.125434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.588 [2024-04-27 00:27:29.138522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.588 [2024-04-27 00:27:29.138589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.588 [2024-04-27 00:27:29.138785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:55.588 [2024-04-27 00:27:29.138834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.588 passed 00:07:55.848 Test: blob_persist_test ...passed 00:07:55.848 Test: blob_decouple_snapshot ...passed 00:07:55.848 Test: blob_seek_io_unit ...passed 00:07:55.848 Test: blob_nested_freezes ...passed 00:07:55.848 Suite: blob_blob_nocopy_extent 00:07:55.848 Test: blob_write ...passed 00:07:56.107 Test: blob_read ...passed 00:07:56.107 Test: blob_rw_verify ...passed 00:07:56.107 Test: blob_rw_verify_iov_nomem ...passed 00:07:56.107 Test: blob_rw_iov_read_only ...passed 00:07:56.107 Test: blob_xattr ...passed 00:07:56.107 Test: blob_dirty_shutdown ...passed 00:07:56.107 Test: blob_is_degraded ...passed 00:07:56.107 Suite: blob_esnap_bs_nocopy_extent 00:07:56.366 Test: blob_esnap_create ...passed 00:07:56.366 Test: blob_esnap_thread_add_remove ...passed 00:07:56.366 Test: blob_esnap_clone_snapshot ...passed 00:07:56.366 Test: blob_esnap_clone_inflate ...passed 00:07:56.366 Test: blob_esnap_clone_decouple ...passed 00:07:56.366 Test: blob_esnap_clone_reload ...passed 00:07:56.624 Test: blob_esnap_hotplug ...passed 00:07:56.624 Suite: blob_copy_noextent 00:07:56.624 Test: blob_init ...[2024-04-27 00:27:29.965311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:56.624 passed 00:07:56.624 Test: blob_thin_provision ...passed 00:07:56.624 Test: blob_read_only ...passed 00:07:56.624 Test: bs_load ...[2024-04-27 00:27:30.019263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:56.624 passed 00:07:56.624 Test: bs_load_custom_cluster_size ...passed 00:07:56.624 Test: bs_load_after_failed_grow ...passed 00:07:56.624 Test: bs_cluster_sz ...[2024-04-27 00:27:30.047436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:56.624 [2024-04-27 00:27:30.047652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: 
*ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:56.624 [2024-04-27 00:27:30.047701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:56.624 passed 00:07:56.624 Test: bs_resize_md ...passed 00:07:56.624 Test: bs_destroy ...passed 00:07:56.624 Test: bs_type ...passed 00:07:56.624 Test: bs_super_block ...passed 00:07:56.624 Test: bs_test_recover_cluster_count ...passed 00:07:56.624 Test: bs_grow_live ...passed 00:07:56.624 Test: bs_grow_live_no_space ...passed 00:07:56.624 Test: bs_test_grow ...passed 00:07:56.624 Test: blob_serialize_test ...passed 00:07:56.624 Test: super_block_crc ...passed 00:07:56.624 Test: blob_thin_prov_write_count_io ...passed 00:07:56.883 Test: blob_thin_prov_unmap_cluster ...passed 00:07:56.883 Test: bs_load_iter_test ...passed 00:07:56.883 Test: blob_relations ...[2024-04-27 00:27:30.264460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.883 [2024-04-27 00:27:30.264661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.883 [2024-04-27 00:27:30.265238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.883 [2024-04-27 00:27:30.265279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.883 passed 00:07:56.883 Test: blob_relations2 ...[2024-04-27 00:27:30.280990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.883 [2024-04-27 00:27:30.281140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.883 [2024-04-27 00:27:30.281169] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.883 [2024-04-27 00:27:30.281186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.883 [2024-04-27 00:27:30.282178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.883 [2024-04-27 00:27:30.282262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.883 [2024-04-27 00:27:30.282605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:56.883 [2024-04-27 00:27:30.282655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.883 passed 00:07:56.883 Test: blob_relations3 ...passed 00:07:56.883 Test: blobstore_clean_power_failure ...passed 00:07:56.883 Test: blob_delete_snapshot_power_failure ...[2024-04-27 00:27:30.467325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:57.143 [2024-04-27 00:27:30.485582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:57.143 [2024-04-27 00:27:30.485691] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:57.143 [2024-04-27 00:27:30.485720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:57.143 [2024-04-27 00:27:30.500038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:57.143 [2024-04-27 00:27:30.500124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:57.143 [2024-04-27 00:27:30.500147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:57.143 [2024-04-27 00:27:30.500172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:57.143 [2024-04-27 00:27:30.514293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:57.143 [2024-04-27 00:27:30.514422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:57.143 [2024-04-27 00:27:30.529248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:57.143 [2024-04-27 00:27:30.529399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:57.143 [2024-04-27 00:27:30.544327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:57.143 [2024-04-27 00:27:30.544476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:57.143 passed 00:07:57.143 Test: blob_create_snapshot_power_failure ...[2024-04-27 00:27:30.587103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:57.143 [2024-04-27 00:27:30.615387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:57.143 [2024-04-27 00:27:30.629851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:57.143 passed 00:07:57.143 Test: blob_io_unit ...passed 00:07:57.143 Test: blob_io_unit_compatibility ...passed 00:07:57.143 Test: blob_ext_md_pages ...passed 00:07:57.403 Test: blob_esnap_io_4096_4096 ...passed 00:07:57.403 Test: blob_esnap_io_512_512 ...passed 00:07:57.403 Test: blob_esnap_io_4096_512 ...passed 00:07:57.403 Test: blob_esnap_io_512_4096 ...passed 00:07:57.403 Suite: blob_bs_copy_noextent 00:07:57.403 Test: blob_open ...passed 00:07:57.403 Test: blob_create ...[2024-04-27 00:27:30.916358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:57.403 passed 00:07:57.662 Test: blob_create_loop ...passed 00:07:57.662 Test: blob_create_fail ...[2024-04-27 00:27:31.021163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:57.662 passed 00:07:57.662 Test: blob_create_internal ...passed 00:07:57.662 Test: blob_create_zero_extent ...passed 00:07:57.662 Test: 
blob_snapshot ...passed 00:07:57.662 Test: blob_clone ...passed 00:07:57.662 Test: blob_inflate ...[2024-04-27 00:27:31.224177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:57.662 passed 00:07:57.921 Test: blob_delete ...passed 00:07:57.921 Test: blob_resize_test ...[2024-04-27 00:27:31.299287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:57.921 passed 00:07:57.921 Test: channel_ops ...passed 00:07:57.921 Test: blob_super ...passed 00:07:57.921 Test: blob_rw_verify_iov ...passed 00:07:57.921 Test: blob_unmap ...passed 00:07:57.921 Test: blob_iter ...passed 00:07:58.180 Test: blob_parse_md ...passed 00:07:58.180 Test: bs_load_pending_removal ...passed 00:07:58.180 Test: bs_unload ...[2024-04-27 00:27:31.603548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:58.180 passed 00:07:58.180 Test: bs_usable_clusters ...passed 00:07:58.180 Test: blob_crc ...[2024-04-27 00:27:31.680213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:58.180 [2024-04-27 00:27:31.680376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:58.180 passed 00:07:58.180 Test: blob_flags ...passed 00:07:58.438 Test: bs_version ...passed 00:07:58.438 Test: blob_set_xattrs_test ...[2024-04-27 00:27:31.795010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:58.438 [2024-04-27 00:27:31.795197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:58.438 passed 00:07:58.438 Test: blob_thin_prov_alloc ...passed 00:07:58.438 Test: blob_insert_cluster_msg_test ...passed 00:07:58.697 Test: blob_thin_prov_rw ...passed 00:07:58.697 Test: blob_thin_prov_rle ...passed 00:07:58.697 Test: blob_thin_prov_rw_iov ...passed 00:07:58.697 Test: blob_snapshot_rw ...passed 00:07:58.697 Test: blob_snapshot_rw_iov ...passed 00:07:58.955 Test: blob_inflate_rw ...passed 00:07:58.955 Test: blob_snapshot_freeze_io ...passed 00:07:59.214 Test: blob_operation_split_rw ...passed 00:07:59.214 Test: blob_operation_split_rw_iov ...passed 00:07:59.214 Test: blob_simultaneous_operations ...[2024-04-27 00:27:32.787523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:59.214 [2024-04-27 00:27:32.787666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.214 [2024-04-27 00:27:32.788174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:59.214 [2024-04-27 00:27:32.788214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.214 [2024-04-27 00:27:32.791073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:59.214 [2024-04-27 00:27:32.791132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 
00:07:59.214 [2024-04-27 00:27:32.791241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:59.214 [2024-04-27 00:27:32.791263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.472 passed 00:07:59.472 Test: blob_persist_test ...passed 00:07:59.472 Test: blob_decouple_snapshot ...passed 00:07:59.472 Test: blob_seek_io_unit ...passed 00:07:59.472 Test: blob_nested_freezes ...passed 00:07:59.472 Suite: blob_blob_copy_noextent 00:07:59.472 Test: blob_write ...passed 00:07:59.472 Test: blob_read ...passed 00:07:59.730 Test: blob_rw_verify ...passed 00:07:59.730 Test: blob_rw_verify_iov_nomem ...passed 00:07:59.730 Test: blob_rw_iov_read_only ...passed 00:07:59.730 Test: blob_xattr ...passed 00:07:59.730 Test: blob_dirty_shutdown ...passed 00:07:59.730 Test: blob_is_degraded ...passed 00:07:59.730 Suite: blob_esnap_bs_copy_noextent 00:07:59.989 Test: blob_esnap_create ...passed 00:07:59.989 Test: blob_esnap_thread_add_remove ...passed 00:07:59.989 Test: blob_esnap_clone_snapshot ...passed 00:07:59.989 Test: blob_esnap_clone_inflate ...passed 00:07:59.989 Test: blob_esnap_clone_decouple ...passed 00:07:59.989 Test: blob_esnap_clone_reload ...passed 00:07:59.989 Test: blob_esnap_hotplug ...passed 00:07:59.989 Suite: blob_copy_extent 00:07:59.989 Test: blob_init ...[2024-04-27 00:27:33.570016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5404:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:00.248 passed 00:08:00.248 Test: blob_thin_provision ...passed 00:08:00.248 Test: blob_read_only ...passed 00:08:00.248 Test: bs_load ...[2024-04-27 00:27:33.625556] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 898:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:00.248 passed 00:08:00.248 Test: bs_load_custom_cluster_size ...passed 00:08:00.248 Test: bs_load_after_failed_grow ...passed 00:08:00.248 Test: bs_cluster_sz ...[2024-04-27 00:27:33.652872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3740:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:00.248 [2024-04-27 00:27:33.653104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5535:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:00.248 [2024-04-27 00:27:33.653148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3799:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:00.248 passed 00:08:00.248 Test: bs_resize_md ...passed 00:08:00.248 Test: bs_destroy ...passed 00:08:00.248 Test: bs_type ...passed 00:08:00.248 Test: bs_super_block ...passed 00:08:00.248 Test: bs_test_recover_cluster_count ...passed 00:08:00.248 Test: bs_grow_live ...passed 00:08:00.248 Test: bs_grow_live_no_space ...passed 00:08:00.248 Test: bs_test_grow ...passed 00:08:00.248 Test: blob_serialize_test ...passed 00:08:00.248 Test: super_block_crc ...passed 00:08:00.248 Test: blob_thin_prov_write_count_io ...passed 00:08:00.248 Test: blob_thin_prov_unmap_cluster ...passed 00:08:00.248 Test: bs_load_iter_test ...passed 00:08:00.508 Test: blob_relations ...[2024-04-27 00:27:33.844655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.509 [2024-04-27 00:27:33.844770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.509 [2024-04-27 00:27:33.845436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.509 [2024-04-27 00:27:33.845484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.509 passed 00:08:00.509 Test: blob_relations2 ...[2024-04-27 00:27:33.860709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.509 [2024-04-27 00:27:33.860822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.509 [2024-04-27 00:27:33.860869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.509 [2024-04-27 00:27:33.860886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.509 [2024-04-27 00:27:33.861894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.509 [2024-04-27 00:27:33.861944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.509 [2024-04-27 00:27:33.862257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7644:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.509 [2024-04-27 00:27:33.862308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.509 passed 00:08:00.509 Test: blob_relations3 ...passed 00:08:00.509 Test: blobstore_clean_power_failure ...passed 00:08:00.509 Test: blob_delete_snapshot_power_failure ...[2024-04-27 00:27:34.042149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:00.509 [2024-04-27 00:27:34.055969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:00.509 [2024-04-27 00:27:34.069595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:00.509 [2024-04-27 00:27:34.069722] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:00.509 [2024-04-27 00:27:34.069754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.509 [2024-04-27 00:27:34.083372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:00.509 [2024-04-27 00:27:34.083439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:00.509 [2024-04-27 00:27:34.083470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:00.509 [2024-04-27 00:27:34.083500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.770 [2024-04-27 00:27:34.097153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:00.770 [2024-04-27 00:27:34.097224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1399:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:00.770 [2024-04-27 00:27:34.097256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:00.770 [2024-04-27 00:27:34.097280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.770 [2024-04-27 00:27:34.110891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7488:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:00.770 [2024-04-27 00:27:34.111000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.770 [2024-04-27 00:27:34.124667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:00.770 [2024-04-27 00:27:34.124803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.770 [2024-04-27 00:27:34.138774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7304:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:00.770 [2024-04-27 00:27:34.138871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.770 passed 00:08:00.770 Test: blob_create_snapshot_power_failure ...[2024-04-27 00:27:34.179784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:00.770 [2024-04-27 00:27:34.192932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1512:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:00.770 [2024-04-27 00:27:34.219470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1602:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:00.770 [2024-04-27 00:27:34.232646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6352:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:00.770 passed 00:08:00.770 Test: blob_io_unit ...passed 00:08:00.770 Test: blob_io_unit_compatibility ...passed 00:08:00.770 Test: blob_ext_md_pages ...passed 00:08:00.770 Test: blob_esnap_io_4096_4096 ...passed 00:08:01.029 Test: blob_esnap_io_512_512 ...passed 00:08:01.029 Test: blob_esnap_io_4096_512 ...passed 00:08:01.029 Test: 
blob_esnap_io_512_4096 ...passed 00:08:01.029 Suite: blob_bs_copy_extent 00:08:01.029 Test: blob_open ...passed 00:08:01.029 Test: blob_create ...[2024-04-27 00:27:34.506591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:01.029 passed 00:08:01.029 Test: blob_create_loop ...passed 00:08:01.029 Test: blob_create_fail ...[2024-04-27 00:27:34.614403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:01.288 passed 00:08:01.288 Test: blob_create_internal ...passed 00:08:01.288 Test: blob_create_zero_extent ...passed 00:08:01.288 Test: blob_snapshot ...passed 00:08:01.288 Test: blob_clone ...passed 00:08:01.288 Test: blob_inflate ...[2024-04-27 00:27:34.810199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7010:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:01.288 passed 00:08:01.288 Test: blob_delete ...passed 00:08:01.547 Test: blob_resize_test ...[2024-04-27 00:27:34.882577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:01.547 passed 00:08:01.547 Test: channel_ops ...passed 00:08:01.547 Test: blob_super ...passed 00:08:01.547 Test: blob_rw_verify_iov ...passed 00:08:01.547 Test: blob_unmap ...passed 00:08:01.547 Test: blob_iter ...passed 00:08:01.547 Test: blob_parse_md ...passed 00:08:01.806 Test: bs_load_pending_removal ...passed 00:08:01.806 Test: bs_unload ...[2024-04-27 00:27:35.185473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5792:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:01.806 passed 00:08:01.806 Test: bs_usable_clusters ...passed 00:08:01.806 Test: blob_crc ...[2024-04-27 00:27:35.264501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:01.806 [2024-04-27 00:27:35.264649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1611:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:01.806 passed 00:08:01.806 Test: blob_flags ...passed 00:08:01.806 Test: bs_version ...passed 00:08:01.806 Test: blob_set_xattrs_test ...[2024-04-27 00:27:35.386243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:01.806 [2024-04-27 00:27:35.386409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6233:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:02.065 passed 00:08:02.065 Test: blob_thin_prov_alloc ...passed 00:08:02.065 Test: blob_insert_cluster_msg_test ...passed 00:08:02.065 Test: blob_thin_prov_rw ...passed 00:08:02.065 Test: blob_thin_prov_rle ...passed 00:08:02.324 Test: blob_thin_prov_rw_iov ...passed 00:08:02.324 Test: blob_snapshot_rw ...passed 00:08:02.324 Test: blob_snapshot_rw_iov ...passed 00:08:02.584 Test: blob_inflate_rw ...passed 00:08:02.584 Test: blob_snapshot_freeze_io ...passed 00:08:02.842 Test: blob_operation_split_rw ...passed 00:08:02.842 Test: blob_operation_split_rw_iov ...passed 00:08:02.842 Test: blob_simultaneous_operations ...[2024-04-27 00:27:36.399205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:02.842 [2024-04-27 
00:27:36.399352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.842 [2024-04-27 00:27:36.399929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:02.842 [2024-04-27 00:27:36.399979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.842 [2024-04-27 00:27:36.402662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:02.842 [2024-04-27 00:27:36.402721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.842 [2024-04-27 00:27:36.402867] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7671:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:02.842 [2024-04-27 00:27:36.402915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7611:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:02.842 passed 00:08:03.101 Test: blob_persist_test ...passed 00:08:03.101 Test: blob_decouple_snapshot ...passed 00:08:03.101 Test: blob_seek_io_unit ...passed 00:08:03.101 Test: blob_nested_freezes ...passed 00:08:03.101 Suite: blob_blob_copy_extent 00:08:03.101 Test: blob_write ...passed 00:08:03.101 Test: blob_read ...passed 00:08:03.360 Test: blob_rw_verify ...passed 00:08:03.360 Test: blob_rw_verify_iov_nomem ...passed 00:08:03.360 Test: blob_rw_iov_read_only ...passed 00:08:03.360 Test: blob_xattr ...passed 00:08:03.360 Test: blob_dirty_shutdown ...passed 00:08:03.360 Test: blob_is_degraded ...passed 00:08:03.360 Suite: blob_esnap_bs_copy_extent 00:08:03.360 Test: blob_esnap_create ...passed 00:08:03.619 Test: blob_esnap_thread_add_remove ...passed 00:08:03.619 Test: blob_esnap_clone_snapshot ...passed 00:08:03.619 Test: blob_esnap_clone_inflate ...passed 00:08:03.619 Test: blob_esnap_clone_decouple ...passed 00:08:03.619 Test: blob_esnap_clone_reload ...passed 00:08:03.619 Test: blob_esnap_hotplug ...passed 00:08:03.619 00:08:03.619 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.619 suites 16 16 n/a 0 0 00:08:03.619 tests 352 352 352 0 0 00:08:03.619 asserts 93211 93211 93211 0 n/a 00:08:03.619 00:08:03.619 Elapsed time = 14.918 seconds 00:08:03.878 00:27:37 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:03.878 00:08:03.878 00:08:03.878 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.878 http://cunit.sourceforge.net/ 00:08:03.878 00:08:03.878 00:08:03.878 Suite: blob_bdev 00:08:03.878 Test: create_bs_dev ...passed 00:08:03.878 Test: create_bs_dev_ro ...[2024-04-27 00:27:37.283733] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:03.878 passed 00:08:03.878 Test: create_bs_dev_rw ...passed 00:08:03.878 Test: claim_bs_dev ...[2024-04-27 00:27:37.284206] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:03.878 passed 00:08:03.878 Test: claim_bs_dev_ro ...passed 00:08:03.878 Test: deferred_destroy_refs ...passed 00:08:03.878 Test: deferred_destroy_channels ...passed 00:08:03.878 Test: deferred_destroy_threads ...passed 00:08:03.878 00:08:03.878 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.878 suites 1 1 n/a 0 0 00:08:03.878 tests 8 8 8 0 0 00:08:03.878 
asserts 119 119 119 0 n/a 00:08:03.878 00:08:03.878 Elapsed time = 0.001 seconds 00:08:03.878 00:27:37 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:03.878 00:08:03.878 00:08:03.878 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.878 http://cunit.sourceforge.net/ 00:08:03.878 00:08:03.878 00:08:03.878 Suite: tree 00:08:03.878 Test: blobfs_tree_op_test ...passed 00:08:03.878 00:08:03.878 Run Summary: Type Total Ran Passed Failed Inactive 00:08:03.878 suites 1 1 n/a 0 0 00:08:03.878 tests 1 1 1 0 0 00:08:03.878 asserts 27 27 27 0 n/a 00:08:03.878 00:08:03.878 Elapsed time = 0.000 seconds 00:08:03.878 00:27:37 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:03.878 00:08:03.878 00:08:03.878 CUnit - A unit testing framework for C - Version 2.1-3 00:08:03.878 http://cunit.sourceforge.net/ 00:08:03.878 00:08:03.878 00:08:03.878 Suite: blobfs_async_ut 00:08:03.878 Test: fs_init ...passed 00:08:03.878 Test: fs_open ...passed 00:08:03.878 Test: fs_create ...passed 00:08:04.137 Test: fs_truncate ...passed 00:08:04.137 Test: fs_rename ...[2024-04-27 00:27:37.485119] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:04.137 passed 00:08:04.137 Test: fs_rw_async ...passed 00:08:04.137 Test: fs_writev_readv_async ...passed 00:08:04.137 Test: tree_find_buffer_ut ...passed 00:08:04.137 Test: channel_ops ...passed 00:08:04.137 Test: channel_ops_sync ...passed 00:08:04.137 00:08:04.137 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.137 suites 1 1 n/a 0 0 00:08:04.137 tests 10 10 10 0 0 00:08:04.137 asserts 292 292 292 0 n/a 00:08:04.137 00:08:04.137 Elapsed time = 0.194 seconds 00:08:04.137 00:27:37 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:04.137 00:08:04.137 00:08:04.137 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.137 http://cunit.sourceforge.net/ 00:08:04.137 00:08:04.137 00:08:04.137 Suite: blobfs_sync_ut 00:08:04.137 Test: cache_read_after_write ...[2024-04-27 00:27:37.680062] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:04.137 passed 00:08:04.137 Test: file_length ...passed 00:08:04.396 Test: append_write_to_extend_blob ...passed 00:08:04.396 Test: partial_buffer ...passed 00:08:04.396 Test: cache_write_null_buffer ...passed 00:08:04.396 Test: fs_create_sync ...passed 00:08:04.396 Test: fs_rename_sync ...passed 00:08:04.396 Test: cache_append_no_cache ...passed 00:08:04.396 Test: fs_delete_file_without_close ...passed 00:08:04.396 00:08:04.396 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.396 suites 1 1 n/a 0 0 00:08:04.396 tests 9 9 9 0 0 00:08:04.396 asserts 345 345 345 0 n/a 00:08:04.396 00:08:04.396 Elapsed time = 0.404 seconds 00:08:04.396 00:27:37 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:04.396 00:08:04.396 00:08:04.396 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.396 http://cunit.sourceforge.net/ 00:08:04.396 00:08:04.396 00:08:04.396 Suite: blobfs_bdev_ut 00:08:04.396 Test: spdk_blobfs_bdev_detect_test ...[2024-04-27 00:27:37.885273] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:08:04.396 passed 00:08:04.396 Test: spdk_blobfs_bdev_create_test ...[2024-04-27 00:27:37.885782] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:04.396 passed 00:08:04.396 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:04.396 00:08:04.396 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.396 suites 1 1 n/a 0 0 00:08:04.396 tests 3 3 3 0 0 00:08:04.396 asserts 9 9 9 0 n/a 00:08:04.396 00:08:04.396 Elapsed time = 0.001 seconds 00:08:04.396 00:08:04.396 real 0m15.665s 00:08:04.396 user 0m15.122s 00:08:04.396 sys 0m0.758s 00:08:04.396 00:27:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.396 ************************************ 00:08:04.396 END TEST unittest_blob_blobfs 00:08:04.396 ************************************ 00:08:04.396 00:27:37 -- common/autotest_common.sh@10 -- # set +x 00:08:04.396 00:27:37 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:08:04.396 00:27:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.396 00:27:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.396 00:27:37 -- common/autotest_common.sh@10 -- # set +x 00:08:04.396 ************************************ 00:08:04.396 START TEST unittest_event 00:08:04.396 ************************************ 00:08:04.396 00:27:37 -- common/autotest_common.sh@1111 -- # unittest_event 00:08:04.396 00:27:37 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:04.657 00:08:04.657 00:08:04.657 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.657 http://cunit.sourceforge.net/ 00:08:04.657 00:08:04.657 00:08:04.657 Suite: app_suite 00:08:04.657 Test: test_spdk_app_parse_args ...app_ut [options] 00:08:04.657 00:08:04.657 CPU options: 00:08:04.657 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:04.657 (like [0,1,10]) 00:08:04.657 --lcores lcore to CPU mapping list. The list is in the format: 00:08:04.657 [<,lcores[@CPUs]>...] 00:08:04.657 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:04.657 Within the group, '-' is used for range separator, 00:08:04.657 ',' is used for single number separator. 00:08:04.657 '( )' can be omitted for single element group, 00:08:04.657 '@' can be omitted if cpus and lcores have the same value 00:08:04.657 --disable-cpumask-locks Disable CPU core lock files. 00:08:04.657 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:04.657 pollers in the app support interrupt mode) 00:08:04.657 -p, --main-core main (primary) core for DPDK 00:08:04.657 00:08:04.657 Configuration options: 00:08:04.657 -c, --config, --json JSON config file 00:08:04.657 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:04.657 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:04.657 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:04.657 --rpcs-allowed comma-separated list of permitted RPCS 00:08:04.657 --json-ignore-init-errors don't exit on invalid config entry 00:08:04.657 00:08:04.657 Memory options: 00:08:04.657 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:04.657 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:04.657 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:04.657 -R, --huge-unlink unlink huge files after initialization 00:08:04.657 -n, --mem-channels number of memory channels used for DPDK 00:08:04.657 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:04.657 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:04.657 --no-huge run without using hugepages 00:08:04.657 -i, --shm-id shared memory ID (optional) 00:08:04.657 -g, --single-file-segments force creating just one hugetlbfs file 00:08:04.657 00:08:04.657 PCI options: 00:08:04.657 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:04.657 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:04.657 -u, --no-pci disable PCI access 00:08:04.657 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:04.657 00:08:04.657 Log options: 00:08:04.657 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:04.657 --silence-noticelog disable notice level logging to stderr 00:08:04.657 00:08:04.657 Trace options: 00:08:04.657 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:04.657 setting 0 to disable trace (default 32768) 00:08:04.657 Tracepoints vary in size and can use more than one trace entry. 00:08:04.657 -e, --tpoint-group [:] 00:08:04.657 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:04.658 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:04.658 a tracepoint group. First tpoint inside a group can be enabled by 00:08:04.658 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:04.658 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:04.658 in /include/spdk_internal/trace_defs.h 00:08:04.658 00:08:04.658 Other options: 00:08:04.658 -h, --help show this usage 00:08:04.658 -v, --version print SPDK version 00:08:04.658 app_ut: invalid option -- 'z' 00:08:04.658 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:04.658 --env-context Opaque context for use of the env implementation 00:08:04.658 app_ut [options] 00:08:04.658 00:08:04.658 CPU options: 00:08:04.658 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:04.658 (like [0,1,10]) 00:08:04.658 --lcores lcore to CPU mapping list. The list is in the format: 00:08:04.658 [<,lcores[@CPUs]>...] 00:08:04.658 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:04.658 Within the group, '-' is used for range separator, 00:08:04.658 ',' is used for single number separator. 00:08:04.658 '( )' can be omitted for single element group, 00:08:04.658 '@' can be omitted if cpus and lcores have the same value 00:08:04.658 --disable-cpumask-locks Disable CPU core lock files. 
00:08:04.658 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:04.658 pollers in the app support interrupt mode) 00:08:04.658 -p, --main-core main (primary) core for DPDK 00:08:04.658 00:08:04.658 Configuration options: 00:08:04.658 -c, --config, --json JSON config file 00:08:04.658 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:04.658 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:04.658 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:04.658 --rpcs-allowed comma-separated list of permitted RPCS 00:08:04.658 --json-ignore-init-errors don't exit on invalid config entry 00:08:04.658 00:08:04.658 Memory options: 00:08:04.658 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:04.658 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:04.658 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:04.658 -R, --huge-unlink unlink huge files after initialization 00:08:04.658 -n, --mem-channels number of memory channels used for DPDK 00:08:04.658 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:04.658 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:04.658 --no-huge run without using hugepages 00:08:04.658 -i, --shm-id shared memory ID (optional) 00:08:04.658 -g, --single-file-segments force creating just one hugetlbfs file 00:08:04.658 00:08:04.658 PCI options: 00:08:04.658 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:04.658 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:04.658 -u, --no-pci disable PCI access 00:08:04.658 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:04.658 00:08:04.658 Log options: 00:08:04.658 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:04.658 --silence-noticelog disable notice level logging to stderr 00:08:04.658 00:08:04.658 Trace options: 00:08:04.658 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:04.658 setting 0 to disable trace (default 32768) 00:08:04.658 Tracepoints vary in size and can use more than one trace entry. 00:08:04.658 -e, --tpoint-group [:] 00:08:04.658 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:04.658 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:04.658 a tracepoint group. First tpoint inside a group can be enabled by 00:08:04.658 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:04.658 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:04.658 in /include/spdk_internal/trace_defs.h 00:08:04.658 00:08:04.658 Other options: 00:08:04.658 -h, --help show this usage 00:08:04.658 -v, --version print SPDK version 00:08:04.658 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:04.658 --env-context Opaque context for use of the env implementation 00:08:04.658 app_ut: unrecognized option '--test-long-opt' 00:08:04.658 [2024-04-27 00:27:37.994844] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1105:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
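The parse failures exercised above come from app_ut feeding spdk_app_parse_args() an invalid short option ('z'), an unrecognized long option ('--test-long-opt'), and an app-specific short-opt string that collides with the generic 'c' (--config) letter. A minimal sketch of how an SPDK application registers app-specific options follows; the names parse_arg, usage, and the 'z' flag are illustrative, and the spdk_app_parse_args() signature is quoted from memory rather than from this build's tree:

#include "spdk/stdinc.h"
#include "spdk/event.h"

/* Callback for app-specific flags; illustrative name, not from this log. */
static int
parse_arg(int ch, char *arg)
{
	/* handle the app's own options here; return 0 on success */
	return 0;
}

/* Printed after the generic option help shown in the usage dumps above. */
static void
usage(void)
{
	printf(" -z                        example app-specific flag\n");
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "app_ut_example"; /* hypothetical app name */

	/* "z" is safe; passing "c" here instead would collide with the
	 * generic -c/--config option and fail with the same "Duplicated
	 * option 'c'" error logged above. */
	if (spdk_app_parse_args(argc, argv, &opts, "z", NULL,
				parse_arg, usage) != SPDK_APP_PARSE_ARGS_SUCCESS) {
		return 1;
	}
	return 0;
}

Passing "c" as the app getopt string instead of "z" reproduces the duplicated-option failure; an unknown flag on the command line reproduces the "invalid option" path, each of which re-prints the full usage text as seen in this output.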
00:08:04.658 app_ut [options] 00:08:04.658 00:08:04.658 CPU options: 00:08:04.658 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:04.658 (like [0,1,10]) 00:08:04.658 --lcores lcore to CPU mapping list. The list is in the format: 00:08:04.658 [<,lcores[@CPUs]>...] 00:08:04.658 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:04.658 Within the group, '-' is used for range separator, 00:08:04.658 ',' is used for single number separator. 00:08:04.658 '( )' can be omitted for single element group, 00:08:04.658 '@' can be omitted if cpus and lcores have the same value 00:08:04.658 --disable-cpumask-locks Disable CPU core lock files. 00:08:04.658 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:04.658 pollers in the app support interrupt mode) 00:08:04.658 -p, --main-core main (primary) core for DPDK 00:08:04.658 [2024-04-27 00:27:37.995135] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1286:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:04.658 00:08:04.658 Configuration options: 00:08:04.658 -c, --config, --json JSON config file 00:08:04.658 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:04.658 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:04.658 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:04.658 --rpcs-allowed comma-separated list of permitted RPCS 00:08:04.658 --json-ignore-init-errors don't exit on invalid config entry 00:08:04.658 00:08:04.658 Memory options: 00:08:04.658 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:04.658 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:04.658 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:04.658 -R, --huge-unlink unlink huge files after initialization 00:08:04.658 -n, --mem-channels number of memory channels used for DPDK 00:08:04.658 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:04.658 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:04.658 --no-huge run without using hugepages 00:08:04.658 -i, --shm-id shared memory ID (optional) 00:08:04.659 -g, --single-file-segments force creating just one hugetlbfs file 00:08:04.659 00:08:04.659 PCI options: 00:08:04.659 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:04.659 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:04.659 -u, --no-pci disable PCI access 00:08:04.659 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:04.659 00:08:04.659 Log options: 00:08:04.659 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:04.659 --silence-noticelog disable notice level logging to stderr 00:08:04.659 00:08:04.659 Trace options: 00:08:04.659 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:04.659 setting 0 to disable trace (default 32768) 00:08:04.659 Tracepoints vary in size and can use more than one trace entry. 00:08:04.659 -e, --tpoint-group [:] 00:08:04.659 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:04.659 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:04.659 a tracepoint group. First tpoint inside a group can be enabled by 00:08:04.659 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:08:04.659 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:04.659 in /include/spdk_internal/trace_defs.h 00:08:04.659 00:08:04.659 Other options: 00:08:04.659 -h, --help show this usage 00:08:04.659 -v, --version print SPDK version 00:08:04.659 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:04.659 --env-context Opaque context for use of the env implementation 00:08:04.659 [2024-04-27 00:27:37.995367] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1191:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:04.659 passed 00:08:04.659 00:08:04.659 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.659 suites 1 1 n/a 0 0 00:08:04.659 tests 1 1 1 0 0 00:08:04.659 asserts 8 8 8 0 n/a 00:08:04.659 00:08:04.659 Elapsed time = 0.001 seconds 00:08:04.659 00:27:38 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:04.659 00:08:04.659 00:08:04.659 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.659 http://cunit.sourceforge.net/ 00:08:04.659 00:08:04.659 00:08:04.659 Suite: app_suite 00:08:04.659 Test: test_create_reactor ...passed 00:08:04.659 Test: test_init_reactors ...passed 00:08:04.659 Test: test_event_call ...passed 00:08:04.659 Test: test_schedule_thread ...passed 00:08:04.659 Test: test_reschedule_thread ...passed 00:08:04.659 Test: test_bind_thread ...passed 00:08:04.659 Test: test_for_each_reactor ...passed 00:08:04.659 Test: test_reactor_stats ...passed 00:08:04.659 Test: test_scheduler ...passed 00:08:04.659 Test: test_governor ...passed 00:08:04.659 00:08:04.659 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.659 suites 1 1 n/a 0 0 00:08:04.659 tests 10 10 10 0 0 00:08:04.659 asserts 344 344 344 0 n/a 00:08:04.659 00:08:04.659 Elapsed time = 0.031 seconds 00:08:04.659 00:08:04.659 real 0m0.110s 00:08:04.659 user 0m0.066s 00:08:04.659 sys 0m0.044s 00:08:04.659 00:27:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.659 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:04.659 ************************************ 00:08:04.659 END TEST unittest_event 00:08:04.659 ************************************ 00:08:04.659 00:27:38 -- unit/unittest.sh@233 -- # uname -s 00:08:04.659 00:27:38 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:08:04.659 00:27:38 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:08:04.659 00:27:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.659 00:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.659 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:04.659 ************************************ 00:08:04.659 START TEST unittest_ftl 00:08:04.659 ************************************ 00:08:04.659 00:27:38 -- common/autotest_common.sh@1111 -- # unittest_ftl 00:08:04.659 00:27:38 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:04.659 00:08:04.659 00:08:04.659 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.659 http://cunit.sourceforge.net/ 00:08:04.659 00:08:04.659 00:08:04.659 Suite: ftl_band_suite 00:08:04.659 Test: test_band_block_offset_from_addr_base ...passed 00:08:04.919 Test: test_band_block_offset_from_addr_offset ...passed 00:08:04.919 Test: test_band_addr_from_block_offset ...passed 00:08:04.919 Test: test_band_set_addr ...passed 00:08:04.919 Test: test_invalidate_addr ...passed 00:08:04.919 Test: test_next_xfer_addr 
...passed 00:08:04.919 00:08:04.919 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.919 suites 1 1 n/a 0 0 00:08:04.919 tests 6 6 6 0 0 00:08:04.919 asserts 30356 30356 30356 0 n/a 00:08:04.919 00:08:04.919 Elapsed time = 0.181 seconds 00:08:04.919 00:27:38 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:04.919 00:08:04.919 00:08:04.919 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.919 http://cunit.sourceforge.net/ 00:08:04.919 00:08:04.919 00:08:04.919 Suite: ftl_bitmap 00:08:04.919 Test: test_ftl_bitmap_create ...[2024-04-27 00:27:38.454164] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:04.919 [2024-04-27 00:27:38.454500] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:04.919 passed 00:08:04.919 Test: test_ftl_bitmap_get ...passed 00:08:04.919 Test: test_ftl_bitmap_set ...passed 00:08:04.919 Test: test_ftl_bitmap_clear ...passed 00:08:04.919 Test: test_ftl_bitmap_find_first_set ...passed 00:08:04.919 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:04.919 Test: test_ftl_bitmap_count_set ...passed 00:08:04.919 00:08:04.919 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.919 suites 1 1 n/a 0 0 00:08:04.919 tests 7 7 7 0 0 00:08:04.919 asserts 137 137 137 0 n/a 00:08:04.919 00:08:04.919 Elapsed time = 0.001 seconds 00:08:04.919 00:27:38 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:04.919 00:08:04.919 00:08:04.919 CUnit - A unit testing framework for C - Version 2.1-3 00:08:04.919 http://cunit.sourceforge.net/ 00:08:04.919 00:08:04.919 00:08:04.919 Suite: ftl_io_suite 00:08:04.919 Test: test_completion ...passed 00:08:04.919 Test: test_multiple_ios ...passed 00:08:04.919 00:08:04.919 Run Summary: Type Total Ran Passed Failed Inactive 00:08:04.919 suites 1 1 n/a 0 0 00:08:04.919 tests 2 2 2 0 0 00:08:04.919 asserts 47 47 47 0 n/a 00:08:04.919 00:08:04.919 Elapsed time = 0.004 seconds 00:08:05.178 00:27:38 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:05.178 00:08:05.178 00:08:05.178 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.178 http://cunit.sourceforge.net/ 00:08:05.178 00:08:05.178 00:08:05.178 Suite: ftl_mngt 00:08:05.178 Test: test_next_step ...passed 00:08:05.178 Test: test_continue_step ...passed 00:08:05.178 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:05.178 Test: test_fail_step ...passed 00:08:05.178 Test: test_mngt_call_and_call_rollback ...passed 00:08:05.178 Test: test_nested_process_failure ...passed 00:08:05.178 00:08:05.178 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.178 suites 1 1 n/a 0 0 00:08:05.178 tests 6 6 6 0 0 00:08:05.178 asserts 176 176 176 0 n/a 00:08:05.178 00:08:05.178 Elapsed time = 0.001 seconds 00:08:05.178 00:27:38 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:05.178 00:08:05.178 00:08:05.178 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.178 http://cunit.sourceforge.net/ 00:08:05.178 00:08:05.178 00:08:05.178 Suite: ftl_mempool 00:08:05.178 Test: test_ftl_mempool_create ...passed 00:08:05.178 Test: test_ftl_mempool_get_put ...passed 00:08:05.178 00:08:05.178 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.178 suites 
1 1 n/a 0 0 00:08:05.178 tests 2 2 2 0 0 00:08:05.178 asserts 36 36 36 0 n/a 00:08:05.178 00:08:05.178 Elapsed time = 0.000 seconds 00:08:05.178 00:27:38 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:05.178 00:08:05.178 00:08:05.178 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.178 http://cunit.sourceforge.net/ 00:08:05.178 00:08:05.178 00:08:05.178 Suite: ftl_addr64_suite 00:08:05.178 Test: test_addr_cached ...passed 00:08:05.178 00:08:05.178 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.178 suites 1 1 n/a 0 0 00:08:05.178 tests 1 1 1 0 0 00:08:05.178 asserts 1536 1536 1536 0 n/a 00:08:05.178 00:08:05.178 Elapsed time = 0.000 seconds 00:08:05.178 00:27:38 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:05.178 00:08:05.178 00:08:05.178 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.178 http://cunit.sourceforge.net/ 00:08:05.178 00:08:05.178 00:08:05.178 Suite: ftl_sb 00:08:05.178 Test: test_sb_crc_v2 ...passed 00:08:05.178 Test: test_sb_crc_v3 ...passed 00:08:05.178 Test: test_sb_v3_md_layout ...[2024-04-27 00:27:38.593725] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:05.178 [2024-04-27 00:27:38.594204] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:05.178 [2024-04-27 00:27:38.594300] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:05.178 [2024-04-27 00:27:38.594391] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:05.178 [2024-04-27 00:27:38.594453] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:05.178 [2024-04-27 00:27:38.594573] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:05.178 [2024-04-27 00:27:38.594629] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:05.178 [2024-04-27 00:27:38.594700] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:05.178 [2024-04-27 00:27:38.594809] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:05.178 [2024-04-27 00:27:38.594882] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:05.178 passed 00:08:05.178 Test: test_sb_v5_md_layout ...[2024-04-27 00:27:38.594938] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:05.178 passed 00:08:05.178 00:08:05.178 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.178 suites 1 1 n/a 0 0 00:08:05.178 tests 4 4 4 0 0 00:08:05.178 asserts 148 148 148 0 n/a 00:08:05.178 00:08:05.178 Elapsed time = 0.003 seconds 00:08:05.178 00:27:38 -- 
unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:05.178 00:08:05.178 00:08:05.178 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.178 http://cunit.sourceforge.net/ 00:08:05.178 00:08:05.178 00:08:05.178 Suite: ftl_layout_upgrade 00:08:05.178 Test: test_l2p_upgrade ...passed 00:08:05.178 00:08:05.178 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.178 suites 1 1 n/a 0 0 00:08:05.178 tests 1 1 1 0 0 00:08:05.178 asserts 140 140 140 0 n/a 00:08:05.178 00:08:05.178 Elapsed time = 0.001 seconds 00:08:05.178 00:08:05.178 real 0m0.461s 00:08:05.178 user 0m0.195s 00:08:05.178 sys 0m0.267s 00:08:05.178 00:27:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.178 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:05.178 ************************************ 00:08:05.178 END TEST unittest_ftl 00:08:05.178 ************************************ 00:08:05.178 00:27:38 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:05.178 00:27:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.178 00:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.178 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:05.178 ************************************ 00:08:05.178 START TEST unittest_accel 00:08:05.178 ************************************ 00:08:05.178 00:27:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:05.178 00:08:05.178 00:08:05.178 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.178 http://cunit.sourceforge.net/ 00:08:05.178 00:08:05.178 00:08:05.178 Suite: accel_sequence 00:08:05.178 Test: test_sequence_fill_copy ...passed 00:08:05.178 Test: test_sequence_abort ...passed 00:08:05.178 Test: test_sequence_append_error ...passed 00:08:05.178 Test: test_sequence_completion_error ...[2024-04-27 00:27:38.724316] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1934:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f22e1cc27c0 00:08:05.178 [2024-04-27 00:27:38.724678] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1934:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f22e1cc27c0 00:08:05.178 passed 00:08:05.178 Test: test_sequence_decompress ...[2024-04-27 00:27:38.724762] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1844:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f22e1cc27c0 00:08:05.178 [2024-04-27 00:27:38.724826] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1844:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f22e1cc27c0 00:08:05.178 passed 00:08:05.178 Test: test_sequence_reverse ...passed 00:08:05.178 Test: test_sequence_copy_elision ...passed 00:08:05.178 Test: test_sequence_accel_buffers ...passed 00:08:05.178 Test: test_sequence_memory_domain ...[2024-04-27 00:27:38.735827] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1736:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:05.178 [2024-04-27 00:27:38.736004] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1775:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:05.178 passed 00:08:05.178 Test: test_sequence_module_memory_domain ...passed 00:08:05.178 Test: test_sequence_crypto ...passed 00:08:05.178 Test: test_sequence_driver 
...[2024-04-27 00:27:38.742444] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1883:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f22e107a7c0 using driver: ut 00:08:05.178 [2024-04-27 00:27:38.742546] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1947:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f22e107a7c0 through driver: ut 00:08:05.178 passed 00:08:05.178 Test: test_sequence_same_iovs ...passed 00:08:05.178 Test: test_sequence_crc32 ...passed 00:08:05.178 Suite: accel 00:08:05.178 Test: test_spdk_accel_task_complete ...passed 00:08:05.178 Test: test_get_task ...passed 00:08:05.178 Test: test_spdk_accel_submit_copy ...passed 00:08:05.178 Test: test_spdk_accel_submit_dualcast ...[2024-04-27 00:27:38.747424] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:05.178 passed 00:08:05.178 Test: test_spdk_accel_submit_compare ...[2024-04-27 00:27:38.747497] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 433:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:05.178 passed 00:08:05.178 Test: test_spdk_accel_submit_fill ...passed 00:08:05.178 Test: test_spdk_accel_submit_crc32c ...passed 00:08:05.178 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:05.178 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:05.178 Test: test_spdk_accel_submit_xor ...passed 00:08:05.178 Test: test_spdk_accel_module_find_by_name ...passed 00:08:05.178 Test: test_spdk_accel_module_register ...passed 00:08:05.178 00:08:05.178 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.178 suites 2 2 n/a 0 0 00:08:05.178 tests 26 26 26 0 0 00:08:05.178 asserts 831 831 831 0 n/a 00:08:05.178 00:08:05.178 Elapsed time = 0.034 seconds 00:08:05.436 00:08:05.436 real 0m0.077s 00:08:05.436 user 0m0.037s 00:08:05.436 sys 0m0.040s 00:08:05.436 00:27:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.436 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:05.436 ************************************ 00:08:05.436 END TEST unittest_accel 00:08:05.436 ************************************ 00:08:05.436 00:27:38 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:05.437 00:27:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.437 00:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.437 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 ************************************ 00:08:05.437 START TEST unittest_ioat 00:08:05.437 ************************************ 00:08:05.437 00:27:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:05.437 00:08:05.437 00:08:05.437 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.437 http://cunit.sourceforge.net/ 00:08:05.437 00:08:05.437 00:08:05.437 Suite: ioat 00:08:05.437 Test: ioat_state_check ...passed 00:08:05.437 00:08:05.437 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.437 suites 1 1 n/a 0 0 00:08:05.437 tests 1 1 1 0 0 00:08:05.437 asserts 32 32 32 0 n/a 00:08:05.437 00:08:05.437 Elapsed time = 0.000 seconds 00:08:05.437 00:08:05.437 real 0m0.022s 00:08:05.437 user 0m0.008s 00:08:05.437 sys 0m0.015s 00:08:05.437 00:27:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.437 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 
************************************ 00:08:05.437 END TEST unittest_ioat 00:08:05.437 ************************************ 00:08:05.437 00:27:38 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:05.437 00:27:38 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:05.437 00:27:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.437 00:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.437 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 ************************************ 00:08:05.437 START TEST unittest_idxd_user 00:08:05.437 ************************************ 00:08:05.437 00:27:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:05.437 00:08:05.437 00:08:05.437 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.437 http://cunit.sourceforge.net/ 00:08:05.437 00:08:05.437 00:08:05.437 Suite: idxd_user 00:08:05.437 Test: test_idxd_wait_cmd ...[2024-04-27 00:27:38.933233] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:05.437 [2024-04-27 00:27:38.933589] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:05.437 passed 00:08:05.437 Test: test_idxd_reset_dev ...[2024-04-27 00:27:38.934124] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:05.437 [2024-04-27 00:27:38.934287] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:05.437 passed 00:08:05.437 Test: test_idxd_group_config ...passed 00:08:05.437 Test: test_idxd_wq_config ...passed 00:08:05.437 00:08:05.437 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.437 suites 1 1 n/a 0 0 00:08:05.437 tests 4 4 4 0 0 00:08:05.437 asserts 20 20 20 0 n/a 00:08:05.437 00:08:05.437 Elapsed time = 0.001 seconds 00:08:05.437 00:08:05.437 real 0m0.022s 00:08:05.437 user 0m0.010s 00:08:05.437 sys 0m0.011s 00:08:05.437 00:27:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.437 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 ************************************ 00:08:05.437 END TEST unittest_idxd_user 00:08:05.437 ************************************ 00:08:05.437 00:27:38 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:08:05.437 00:27:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.437 00:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.437 00:27:38 -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 ************************************ 00:08:05.437 START TEST unittest_iscsi 00:08:05.437 ************************************ 00:08:05.437 00:27:39 -- common/autotest_common.sh@1111 -- # unittest_iscsi 00:08:05.437 00:27:39 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:05.437 00:08:05.437 00:08:05.437 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.437 http://cunit.sourceforge.net/ 00:08:05.437 00:08:05.437 00:08:05.437 Suite: conn_suite 00:08:05.437 Test: read_task_split_in_order_case ...passed 00:08:05.437 Test: read_task_split_reverse_order_case ...passed 00:08:05.437 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 
00:08:05.437 Test: process_non_read_task_completion_test ...passed 00:08:05.437 Test: free_tasks_on_connection ...passed 00:08:05.437 Test: free_tasks_with_queued_datain ...passed 00:08:05.437 Test: abort_queued_datain_task_test ...passed 00:08:05.437 Test: abort_queued_datain_tasks_test ...passed 00:08:05.437 00:08:05.696 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.696 suites 1 1 n/a 0 0 00:08:05.696 tests 8 8 8 0 0 00:08:05.696 asserts 230 230 230 0 n/a 00:08:05.696 00:08:05.696 Elapsed time = 0.000 seconds 00:08:05.696 00:27:39 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:05.696 00:08:05.696 00:08:05.696 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.696 http://cunit.sourceforge.net/ 00:08:05.696 00:08:05.696 00:08:05.696 Suite: iscsi_suite 00:08:05.696 Test: param_negotiation_test ...passed 00:08:05.696 Test: list_negotiation_test ...passed 00:08:05.696 Test: parse_valid_test ...passed 00:08:05.696 Test: parse_invalid_test ...[2024-04-27 00:27:39.051141] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:05.696 [2024-04-27 00:27:39.051449] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:05.696 [2024-04-27 00:27:39.051501] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:08:05.696 [2024-04-27 00:27:39.051568] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:05.696 [2024-04-27 00:27:39.051719] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:05.696 [2024-04-27 00:27:39.051777] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:08:05.696 passed[2024-04-27 00:27:39.051904] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:05.696 00:08:05.696 00:08:05.696 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.696 suites 1 1 n/a 0 0 00:08:05.696 tests 4 4 4 0 0 00:08:05.696 asserts 161 161 161 0 n/a 00:08:05.696 00:08:05.696 Elapsed time = 0.004 seconds 00:08:05.696 00:27:39 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:05.696 00:08:05.696 00:08:05.696 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.696 http://cunit.sourceforge.net/ 00:08:05.696 00:08:05.696 00:08:05.696 Suite: iscsi_target_node_suite 00:08:05.696 Test: add_lun_test_cases ...[2024-04-27 00:27:39.079283] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:05.696 [2024-04-27 00:27:39.079620] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:05.696 [2024-04-27 00:27:39.079725] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:05.696 [2024-04-27 00:27:39.079778] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:05.696 [2024-04-27 00:27:39.079839] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:05.696 passed 00:08:05.696 Test: allow_any_allowed ...passed 00:08:05.696 Test: allow_ipv6_allowed ...passed 00:08:05.696 Test: 
allow_ipv6_denied ...passed 00:08:05.696 Test: allow_ipv6_invalid ...passed 00:08:05.696 Test: allow_ipv4_allowed ...passed 00:08:05.696 Test: allow_ipv4_denied ...passed 00:08:05.696 Test: allow_ipv4_invalid ...passed 00:08:05.696 Test: node_access_allowed ...passed 00:08:05.696 Test: node_access_denied_by_empty_netmask ...passed 00:08:05.697 Test: node_access_multi_initiator_groups_cases ...passed 00:08:05.697 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:05.697 Test: chap_param_test_cases ...[2024-04-27 00:27:39.080277] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:05.697 [2024-04-27 00:27:39.080329] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:05.697 [2024-04-27 00:27:39.080392] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:05.697 passed[2024-04-27 00:27:39.080434] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:05.697 [2024-04-27 00:27:39.080482] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:05.697 00:08:05.697 00:08:05.697 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.697 suites 1 1 n/a 0 0 00:08:05.697 tests 13 13 13 0 0 00:08:05.697 asserts 50 50 50 0 n/a 00:08:05.697 00:08:05.697 Elapsed time = 0.001 seconds 00:08:05.697 00:27:39 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:05.697 00:08:05.697 00:08:05.697 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.697 http://cunit.sourceforge.net/ 00:08:05.697 00:08:05.697 00:08:05.697 Suite: iscsi_suite 00:08:05.697 Test: op_login_check_target_test ...[2024-04-27 00:27:39.107969] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:08:05.697 passed 00:08:05.697 Test: op_login_session_normal_test ...[2024-04-27 00:27:39.108300] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:05.697 [2024-04-27 00:27:39.108359] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:05.697 [2024-04-27 00:27:39.108406] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:05.697 [2024-04-27 00:27:39.108461] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:05.697 [2024-04-27 00:27:39.108556] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:05.697 [2024-04-27 00:27:39.108665] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:05.697 passed 00:08:05.697 Test: maxburstlength_test ...[2024-04-27 00:27:39.108730] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:05.697 [2024-04-27 00:27:39.108949] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the 
dataout pdu data length is larger than the value sent by R2T PDU 00:08:05.697 passed 00:08:05.697 Test: underflow_for_read_transfer_test ...[2024-04-27 00:27:39.109010] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:08:05.697 passed 00:08:05.697 Test: underflow_for_zero_read_transfer_test ...passed 00:08:05.697 Test: underflow_for_request_sense_test ...passed 00:08:05.697 Test: underflow_for_check_condition_test ...passed 00:08:05.697 Test: add_transfer_task_test ...passed 00:08:05.697 Test: get_transfer_task_test ...passed 00:08:05.697 Test: del_transfer_task_test ...passed 00:08:05.697 Test: clear_all_transfer_tasks_test ...passed 00:08:05.697 Test: build_iovs_test ...passed 00:08:05.697 Test: build_iovs_with_md_test ...passed 00:08:05.697 Test: pdu_hdr_op_login_test ...[2024-04-27 00:27:39.110395] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:05.697 [2024-04-27 00:27:39.110515] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:05.697 [2024-04-27 00:27:39.110608] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:08:05.697 passed 00:08:05.697 Test: pdu_hdr_op_text_test ...[2024-04-27 00:27:39.110724] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:05.697 [2024-04-27 00:27:39.110828] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:05.697 [2024-04-27 00:27:39.110882] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:05.697 passed 00:08:05.697 Test: pdu_hdr_op_logout_test ...[2024-04-27 00:27:39.110956] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:08:05.697 passed 00:08:05.697 Test: pdu_hdr_op_scsi_test ...[2024-04-27 00:27:39.111114] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:05.697 [2024-04-27 00:27:39.111160] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:05.697 [2024-04-27 00:27:39.111207] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:05.697 [2024-04-27 00:27:39.111298] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:05.697 [2024-04-27 00:27:39.111395] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:05.697 [2024-04-27 00:27:39.111566] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:05.697 passed 00:08:05.697 Test: pdu_hdr_op_task_mgmt_test ...[2024-04-27 00:27:39.111663] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:05.697 [2024-04-27 00:27:39.111728] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:05.697 passed 00:08:05.697 Test: pdu_hdr_op_nopout_test ...[2024-04-27 00:27:39.111947] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:05.697 [2024-04-27 00:27:39.112027] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:05.697 [2024-04-27 00:27:39.112065] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:05.697 [2024-04-27 00:27:39.112106] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:05.697 passed 00:08:05.697 Test: pdu_hdr_op_data_test ...[2024-04-27 00:27:39.112152] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:05.697 [2024-04-27 00:27:39.112221] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:05.697 [2024-04-27 00:27:39.112302] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:05.697 [2024-04-27 00:27:39.112361] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:05.697 [2024-04-27 00:27:39.112426] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:05.697 [2024-04-27 00:27:39.112524] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:05.697 [2024-04-27 00:27:39.112572] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:05.697 passed 00:08:05.697 Test: empty_text_with_cbit_test ...passed 00:08:05.697 Test: pdu_payload_read_test ...[2024-04-27 
00:27:39.114543] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:05.697 passed 00:08:05.697 Test: data_out_pdu_sequence_test ...passed 00:08:05.697 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:05.697 00:08:05.697 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.697 suites 1 1 n/a 0 0 00:08:05.697 tests 24 24 24 0 0 00:08:05.697 asserts 150253 150253 150253 0 n/a 00:08:05.697 00:08:05.697 Elapsed time = 0.016 seconds 00:08:05.697 00:27:39 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:05.697 00:08:05.697 00:08:05.697 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.697 http://cunit.sourceforge.net/ 00:08:05.697 00:08:05.697 00:08:05.697 Suite: init_grp_suite 00:08:05.697 Test: create_initiator_group_success_case ...passed 00:08:05.697 Test: find_initiator_group_success_case ...passed 00:08:05.697 Test: register_initiator_group_twice_case ...passed 00:08:05.697 Test: add_initiator_name_success_case ...passed 00:08:05.697 Test: add_initiator_name_fail_case ...[2024-04-27 00:27:39.152595] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:05.697 passed 00:08:05.697 Test: delete_all_initiator_names_success_case ...passed 00:08:05.697 Test: add_netmask_success_case ...passed 00:08:05.697 Test: add_netmask_fail_case ...[2024-04-27 00:27:39.153061] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:05.697 passed 00:08:05.697 Test: delete_all_netmasks_success_case ...passed 00:08:05.697 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:05.697 Test: netmask_overwrite_all_to_any_case ...passed 00:08:05.697 Test: add_delete_initiator_names_case ...passed 00:08:05.697 Test: add_duplicated_initiator_names_case ...passed 00:08:05.697 Test: delete_nonexisting_initiator_names_case ...passed 00:08:05.697 Test: add_delete_netmasks_case ...passed 00:08:05.697 Test: add_duplicated_netmasks_case ...passed 00:08:05.697 Test: delete_nonexisting_netmasks_case ...passed 00:08:05.697 00:08:05.697 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.697 suites 1 1 n/a 0 0 00:08:05.697 tests 17 17 17 0 0 00:08:05.697 asserts 108 108 108 0 n/a 00:08:05.697 00:08:05.698 Elapsed time = 0.001 seconds 00:08:05.698 00:27:39 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:05.698 00:08:05.698 00:08:05.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.698 http://cunit.sourceforge.net/ 00:08:05.698 00:08:05.698 00:08:05.698 Suite: portal_grp_suite 00:08:05.698 Test: portal_create_ipv4_normal_case ...passed 00:08:05.698 Test: portal_create_ipv6_normal_case ...passed 00:08:05.698 Test: portal_create_ipv4_wildcard_case ...passed 00:08:05.698 Test: portal_create_ipv6_wildcard_case ...passed 00:08:05.698 Test: portal_create_twice_case ...[2024-04-27 00:27:39.177696] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:05.698 passed 00:08:05.698 Test: portal_grp_register_unregister_case ...passed 00:08:05.698 Test: portal_grp_register_twice_case ...passed 00:08:05.698 Test: portal_grp_add_delete_case ...passed 00:08:05.698 Test: portal_grp_add_delete_twice_case ...passed 00:08:05.698 00:08:05.698 Run Summary: 
Type Total Ran Passed Failed Inactive 00:08:05.698 suites 1 1 n/a 0 0 00:08:05.698 tests 9 9 9 0 0 00:08:05.698 asserts 44 44 44 0 n/a 00:08:05.698 00:08:05.698 Elapsed time = 0.003 seconds 00:08:05.698 00:08:05.698 real 0m0.185s 00:08:05.698 user 0m0.106s 00:08:05.698 sys 0m0.081s 00:08:05.698 00:27:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.698 00:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.698 ************************************ 00:08:05.698 END TEST unittest_iscsi 00:08:05.698 ************************************ 00:08:05.698 00:27:39 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:08:05.698 00:27:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.698 00:27:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.698 00:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.698 ************************************ 00:08:05.698 START TEST unittest_json 00:08:05.698 ************************************ 00:08:05.698 00:27:39 -- common/autotest_common.sh@1111 -- # unittest_json 00:08:05.698 00:27:39 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:05.698 00:08:05.698 00:08:05.698 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.698 http://cunit.sourceforge.net/ 00:08:05.698 00:08:05.698 00:08:05.698 Suite: json 00:08:05.698 Test: test_parse_literal ...passed 00:08:05.698 Test: test_parse_string_simple ...passed 00:08:05.698 Test: test_parse_string_control_chars ...passed 00:08:05.698 Test: test_parse_string_utf8 ...passed 00:08:05.698 Test: test_parse_string_escapes_twochar ...passed 00:08:05.698 Test: test_parse_string_escapes_unicode ...passed 00:08:05.698 Test: test_parse_number ...passed 00:08:05.698 Test: test_parse_array ...passed 00:08:05.698 Test: test_parse_object ...passed 00:08:05.698 Test: test_parse_nesting ...passed 00:08:05.698 Test: test_parse_comment ...passed 00:08:05.698 00:08:05.698 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.698 suites 1 1 n/a 0 0 00:08:05.698 tests 11 11 11 0 0 00:08:05.698 asserts 1516 1516 1516 0 n/a 00:08:05.698 00:08:05.698 Elapsed time = 0.001 seconds 00:08:05.698 00:27:39 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:05.956 00:08:05.956 00:08:05.956 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.956 http://cunit.sourceforge.net/ 00:08:05.956 00:08:05.956 00:08:05.956 Suite: json 00:08:05.956 Test: test_strequal ...passed 00:08:05.956 Test: test_num_to_uint16 ...passed 00:08:05.956 Test: test_num_to_int32 ...passed 00:08:05.956 Test: test_num_to_uint64 ...passed 00:08:05.956 Test: test_decode_object ...passed 00:08:05.956 Test: test_decode_array ...passed 00:08:05.956 Test: test_decode_bool ...passed 00:08:05.956 Test: test_decode_uint16 ...passed 00:08:05.956 Test: test_decode_int32 ...passed 00:08:05.956 Test: test_decode_uint32 ...passed 00:08:05.956 Test: test_decode_uint64 ...passed 00:08:05.956 Test: test_decode_string ...passed 00:08:05.956 Test: test_decode_uuid ...passed 00:08:05.956 Test: test_find ...passed 00:08:05.956 Test: test_find_array ...passed 00:08:05.956 Test: test_iterating ...passed 00:08:05.956 Test: test_free_object ...passed 00:08:05.956 00:08:05.956 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.956 suites 1 1 n/a 0 0 00:08:05.956 tests 17 17 17 0 0 00:08:05.956 asserts 236 236 236 0 n/a 00:08:05.956 00:08:05.956 Elapsed time = 0.001 seconds 00:08:05.956 
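Every *_ut binary in this run prints the same "CUnit - A unit testing framework for C" banner, per-test "passed" lines, and "Run Summary" table seen above; the harness around each suite is stock CUnit Basic-interface boilerplate. A sketch, with placeholder suite and test names rather than the actual json_write_ut registrations:

#include <CUnit/Basic.h>

/* Placeholder test body; real suites assert against SPDK APIs. */
static void
test_write_literal(void)
{
	CU_ASSERT(1 == 1);
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	suite = CU_add_suite("json", NULL, NULL);
	if (suite == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_add_test(suite, "test_write_literal", test_write_literal);

	/* CU_BRM_VERBOSE produces the Suite:/Test:/Run Summary block
	 * that repeats throughout this log. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();
	return (int)num_failures;
}

Returning the failure count as the exit status is what lets the surrounding run_test shell wrapper distinguish a passing suite from a failed one.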
00:27:39 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:05.956 00:08:05.956 00:08:05.956 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.956 http://cunit.sourceforge.net/ 00:08:05.956 00:08:05.956 00:08:05.956 Suite: json 00:08:05.956 Test: test_write_literal ...passed 00:08:05.956 Test: test_write_string_simple ...passed 00:08:05.956 Test: test_write_string_escapes ...passed 00:08:05.956 Test: test_write_string_utf16le ...passed 00:08:05.956 Test: test_write_number_int32 ...passed 00:08:05.956 Test: test_write_number_uint32 ...passed 00:08:05.956 Test: test_write_number_uint128 ...passed 00:08:05.956 Test: test_write_string_number_uint128 ...passed 00:08:05.956 Test: test_write_number_int64 ...passed 00:08:05.956 Test: test_write_number_uint64 ...passed 00:08:05.956 Test: test_write_number_double ...passed 00:08:05.956 Test: test_write_uuid ...passed 00:08:05.956 Test: test_write_array ...passed 00:08:05.956 Test: test_write_object ...passed 00:08:05.956 Test: test_write_nesting ...passed 00:08:05.956 Test: test_write_val ...passed 00:08:05.956 00:08:05.956 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.956 suites 1 1 n/a 0 0 00:08:05.956 tests 16 16 16 0 0 00:08:05.956 asserts 918 918 918 0 n/a 00:08:05.956 00:08:05.956 Elapsed time = 0.004 seconds 00:08:05.956 00:27:39 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:05.957 00:08:05.957 00:08:05.957 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.957 http://cunit.sourceforge.net/ 00:08:05.957 00:08:05.957 00:08:05.957 Suite: jsonrpc 00:08:05.957 Test: test_parse_request ...passed 00:08:05.957 Test: test_parse_request_streaming ...passed 00:08:05.957 00:08:05.957 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.957 suites 1 1 n/a 0 0 00:08:05.957 tests 2 2 2 0 0 00:08:05.957 asserts 289 289 289 0 n/a 00:08:05.957 00:08:05.957 Elapsed time = 0.003 seconds 00:08:05.957 00:08:05.957 real 0m0.101s 00:08:05.957 user 0m0.048s 00:08:05.957 sys 0m0.053s 00:08:05.957 00:27:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.957 00:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.957 ************************************ 00:08:05.957 END TEST unittest_json 00:08:05.957 ************************************ 00:08:05.957 00:27:39 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:08:05.957 00:27:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.957 00:27:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.957 00:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.957 ************************************ 00:08:05.957 START TEST unittest_rpc 00:08:05.957 ************************************ 00:08:05.957 00:27:39 -- common/autotest_common.sh@1111 -- # unittest_rpc 00:08:05.957 00:27:39 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:05.957 00:08:05.957 00:08:05.957 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.957 http://cunit.sourceforge.net/ 00:08:05.957 00:08:05.957 00:08:05.957 Suite: rpc 00:08:05.957 Test: test_jsonrpc_handler ...passed 00:08:05.957 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:05.957 Test: test_rpc_get_methods ...[2024-04-27 00:27:39.425079] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:05.957 passed 00:08:05.957 Test: 
test_rpc_spdk_get_version ...passed 00:08:05.957 Test: test_spdk_rpc_listen_close ...passed 00:08:05.957 Test: test_rpc_run_multiple_servers ...passed 00:08:05.957 00:08:05.957 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.957 suites 1 1 n/a 0 0 00:08:05.957 tests 6 6 6 0 0 00:08:05.957 asserts 23 23 23 0 n/a 00:08:05.957 00:08:05.957 Elapsed time = 0.000 seconds 00:08:05.957 00:08:05.957 real 0m0.023s 00:08:05.957 user 0m0.016s 00:08:05.957 sys 0m0.008s 00:08:05.957 00:27:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.957 00:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.957 ************************************ 00:08:05.957 END TEST unittest_rpc 00:08:05.957 ************************************ 00:08:05.957 00:27:39 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:05.957 00:27:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.957 00:27:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.957 00:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.957 ************************************ 00:08:05.957 START TEST unittest_notify 00:08:05.957 ************************************ 00:08:05.957 00:27:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:05.957 00:08:05.957 00:08:05.957 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.957 http://cunit.sourceforge.net/ 00:08:05.957 00:08:05.957 00:08:05.957 Suite: app_suite 00:08:05.957 Test: notify ...passed 00:08:05.957 00:08:05.957 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.957 suites 1 1 n/a 0 0 00:08:05.957 tests 1 1 1 0 0 00:08:05.957 asserts 13 13 13 0 n/a 00:08:05.957 00:08:05.957 Elapsed time = 0.000 seconds 00:08:05.957 00:08:05.957 real 0m0.022s 00:08:05.957 user 0m0.009s 00:08:05.957 sys 0m0.013s 00:08:05.957 00:27:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.957 00:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:05.957 ************************************ 00:08:05.957 END TEST unittest_notify 00:08:05.957 ************************************ 00:08:06.216 00:27:39 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:08:06.216 00:27:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:06.216 00:27:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.216 00:27:39 -- common/autotest_common.sh@10 -- # set +x 00:08:06.216 ************************************ 00:08:06.216 START TEST unittest_nvme 00:08:06.216 ************************************ 00:08:06.216 00:27:39 -- common/autotest_common.sh@1111 -- # unittest_nvme 00:08:06.216 00:27:39 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:06.216 00:08:06.216 00:08:06.216 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.216 http://cunit.sourceforge.net/ 00:08:06.216 00:08:06.216 00:08:06.216 Suite: nvme 00:08:06.216 Test: test_opc_data_transfer ...passed 00:08:06.216 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:06.216 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:06.216 Test: test_trid_parse_and_compare ...[2024-04-27 00:27:39.594612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1171:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:06.216 [2024-04-27 00:27:39.594972] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed 
to parse transport ID 00:08:06.216 [2024-04-27 00:27:39.595102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1183:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:06.216 [2024-04-27 00:27:39.595155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:06.216 [2024-04-27 00:27:39.595206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1194:parse_next_key: *ERROR*: Key without value 00:08:06.216 [2024-04-27 00:27:39.595328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1228:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:06.216 passed 00:08:06.216 Test: test_trid_trtype_str ...passed 00:08:06.216 Test: test_trid_adrfam_str ...passed 00:08:06.216 Test: test_nvme_ctrlr_probe ...[2024-04-27 00:27:39.595631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:06.216 passed 00:08:06.216 Test: test_spdk_nvme_probe ...[2024-04-27 00:27:39.595753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:06.216 [2024-04-27 00:27:39.595804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:06.216 [2024-04-27 00:27:39.595938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:06.216 passed 00:08:06.216 Test: test_spdk_nvme_connect ...[2024-04-27 00:27:39.595999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:06.216 [2024-04-27 00:27:39.596115] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 993:spdk_nvme_connect: *ERROR*: No transport ID specified 00:08:06.216 [2024-04-27 00:27:39.596543] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:06.216 passed 00:08:06.216 Test: test_nvme_ctrlr_probe_internal ...[2024-04-27 00:27:39.596647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1004:spdk_nvme_connect: *ERROR*: Create probe context failed 00:08:06.216 [2024-04-27 00:27:39.596805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:06.216 passed 00:08:06.216 Test: test_nvme_init_controllers ...[2024-04-27 00:27:39.596873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:06.216 [2024-04-27 00:27:39.596969] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:06.216 passed 00:08:06.216 Test: test_nvme_driver_init ...[2024-04-27 00:27:39.597106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:06.216 [2024-04-27 00:27:39.597174] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:06.216 [2024-04-27 00:27:39.705735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:06.216 passed 00:08:06.216 Test: test_spdk_nvme_detach ...passed 00:08:06.216 Test: test_nvme_completion_poll_cb ...[2024-04-27 00:27:39.705936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:06.216 passed 00:08:06.216 Test: test_nvme_user_copy_cmd_complete ...passed 
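The transport-ID and parse failures logged earlier in this suite (test_trid_parse_and_compare, nvme_parse_addr) come from spdk_nvme_transport_id_parse() in lib/nvme/nvme.c. As a reference for the strings this parser accepts, here is a minimal sketch against the public spdk/nvme.h API; the PCIe address is illustrative, not taken from this run.

```c
/* Minimal sketch of the transport-ID parsing exercised by
 * test_trid_parse_and_compare above. Assumes SPDK's public
 * spdk/nvme.h API; the address values are illustrative only. */
#include <stdio.h>
#include "spdk/nvme.h"

int
main(void)
{
    struct spdk_nvme_transport_id trid = {};

    /* Well-formed: "key:value" pairs separated by whitespace. */
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:PCIe traddr:0000:01:00.0") != 0) {
        fprintf(stderr, "unexpected parse failure\n");
        return 1;
    }

    /* Malformed: no ':' or '=' separator -- this is the
     * "Key without ':' or '=' separator" error seen in the log. */
    if (spdk_nvme_transport_id_parse(&trid, "trtype PCIe") == 0) {
        fprintf(stderr, "parse should have failed\n");
        return 1;
    }
    printf("trid parse checks behaved as expected\n");
    return 0;
}
```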
00:08:06.216 Test: test_nvme_allocate_request_null ...passed 00:08:06.216 Test: test_nvme_allocate_request ...passed 00:08:06.216 Test: test_nvme_free_request ...passed 00:08:06.216 Test: test_nvme_allocate_request_user_copy ...passed 00:08:06.216 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:06.216 Test: test_nvme_request_check_timeout ...passed 00:08:06.216 Test: test_nvme_wait_for_completion ...passed 00:08:06.216 Test: test_spdk_nvme_parse_func ...passed 00:08:06.216 Test: test_spdk_nvme_detach_async ...passed 00:08:06.216 Test: test_nvme_parse_addr ...[2024-04-27 00:27:39.706538] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1581:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:06.216 passed 00:08:06.216 00:08:06.216 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.216 suites 1 1 n/a 0 0 00:08:06.216 tests 25 25 25 0 0 00:08:06.216 asserts 326 326 326 0 n/a 00:08:06.216 00:08:06.216 Elapsed time = 0.006 seconds 00:08:06.216 00:27:39 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:06.216 00:08:06.216 00:08:06.216 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.216 http://cunit.sourceforge.net/ 00:08:06.216 00:08:06.216 00:08:06.216 Suite: nvme_ctrlr 00:08:06.216 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-04-27 00:27:39.744106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 passed 00:08:06.216 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-04-27 00:27:39.745812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 passed 00:08:06.216 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-04-27 00:27:39.747156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 passed 00:08:06.216 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-04-27 00:27:39.748363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 passed 00:08:06.216 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-04-27 00:27:39.749654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 [2024-04-27 00:27:39.750869] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-04-27 00:27:39.752095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-04-27 00:27:39.753305] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:08:06.216 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-04-27 00:27:39.755854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 [2024-04-27 00:27:39.758229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-04-27
00:27:39.759541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:08:06.216 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-04-27 00:27:39.761992] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 [2024-04-27 00:27:39.763219] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-04-27 00:27:39.765660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3946:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:08:06.216 Test: test_nvme_ctrlr_init_delay ...[2024-04-27 00:27:39.768270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 passed 00:08:06.216 Test: test_alloc_io_qpair_rr_1 ...[2024-04-27 00:27:39.769667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 [2024-04-27 00:27:39.769868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:06.216 [2024-04-27 00:27:39.770121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:06.216 [2024-04-27 00:27:39.770211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:06.216 [2024-04-27 00:27:39.770272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 398:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:06.216 passed 00:08:06.216 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:06.216 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:06.216 Test: test_alloc_io_qpair_wrr_1 ...[2024-04-27 00:27:39.770428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.216 passed 00:08:06.217 Test: test_alloc_io_qpair_wrr_2 ...[2024-04-27 00:27:39.770654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.217 [2024-04-27 00:27:39.770806] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:06.217 passed 00:08:06.217 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-04-27 00:27:39.771106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4857:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:06.217 [2024-04-27 00:27:39.771296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4894:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:06.217 [2024-04-27 00:27:39.771423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4934:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed!
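The firmware-update errors around this point are the legs of test_spdk_nvme_ctrlr_update_firmware: a size that is not dword-aligned, a failed image download, and a failed Firmware Commit. A hedged sketch of the corresponding application-side call follows, assuming the public spdk_nvme_ctrlr_update_firmware() prototype from spdk/nvme.h; the slot number is a placeholder and the size check mirrors the test's expectation rather than any vendor requirement.

```c
/* Hedged sketch of the firmware-update path whose error legs are
 * exercised above. Assumes spdk/nvme.h; `ctrlr` and `image` are
 * set up elsewhere, slot 1 is illustrative. */
#include <errno.h>
#include "spdk/nvme.h"

static int
flash_firmware(struct spdk_nvme_ctrlr *ctrlr, void *image, uint32_t size)
{
    struct spdk_nvme_status status;

    /* The "invalid size!" leg above fires when size is 0 or not
     * dword-aligned, before any command reaches the device. */
    if (size == 0 || (size % 4) != 0) {
        return -EINVAL;
    }

    /* Downloads the image, then issues Firmware Commit; the other
     * two error legs above fail these steps via stubbed transports. */
    return spdk_nvme_ctrlr_update_firmware(ctrlr, image, size,
                                           1 /* slot, illustrative */,
                                           SPDK_NVME_FW_COMMIT_REPLACE_IMG,
                                           &status);
}
```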
00:08:06.217 [2024-04-27 00:27:39.771502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4894:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:06.217 passed 00:08:06.217 Test: test_nvme_ctrlr_fail ...[2024-04-27 00:27:39.771572] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:08:06.217 passed 00:08:06.217 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:06.217 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:06.217 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:06.217 Test: test_nvme_ctrlr_test_active_ns ...[2024-04-27 00:27:39.771899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.475 passed 00:08:06.475 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:06.475 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:06.475 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:06.475 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-04-27 00:27:40.032237] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.475 passed 00:08:06.475 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-04-27 00:27:40.039528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.475 passed 00:08:06.475 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-04-27 00:27:40.040804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.475 [2024-04-27 00:27:40.040882] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2882:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:06.475 passed 00:08:06.475 Test: test_alloc_io_qpair_fail ...[2024-04-27 00:27:40.042029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.475 passed 00:08:06.475 Test: test_nvme_ctrlr_add_remove_process ...passed 00:08:06.475 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-04-27 00:27:40.042146] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 510:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:06.475 passed 00:08:06.475 Test: test_nvme_ctrlr_set_state ...passed 00:08:06.475 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-04-27 00:27:40.042283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
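The "admin_queue_size 0 is less than minimum defined by NVMe spec" message that repeats through this suite is the library clamping a zeroed options struct up to the spec minimum; the tests construct controllers that way on purpose. For contrast, a minimal sketch of how options are normally seeded, assuming the public spdk/nvme.h API; the value 32 is illustrative.

```c
/* Options are normally taken from library defaults rather than a
 * zeroed struct, so fields like admin_queue_size start valid.
 * Minimal sketch, assuming spdk/nvme.h. */
#include "spdk/nvme.h"

static void
seed_ctrlr_opts(void)
{
    struct spdk_nvme_ctrlr_opts opts;

    /* Fill in library defaults for every field. */
    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
    opts.admin_queue_size = 32; /* illustrative; must meet the spec minimum */

    (void)opts; /* real code passes &opts to spdk_nvme_connect()/probe paths */
}
```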
00:08:06.475 [2024-04-27 00:27:40.042329] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.475 passed 00:08:06.475 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-04-27 00:27:40.060045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.735 passed 00:08:06.735 Test: test_nvme_ctrlr_ns_mgmt ...[2024-04-27 00:27:40.096209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.735 passed 00:08:06.735 Test: test_nvme_ctrlr_reset ...[2024-04-27 00:27:40.097756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.735 passed 00:08:06.735 Test: test_nvme_ctrlr_aer_callback ...[2024-04-27 00:27:40.098091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.735 passed 00:08:06.735 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-04-27 00:27:40.099513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.735 passed 00:08:06.735 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:06.735 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:06.735 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-04-27 00:27:40.101258] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.735 passed 00:08:06.735 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:06.735 Test: test_nvme_ctrlr_ana_resize ...[2024-04-27 00:27:40.102636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.735 passed 00:08:06.735 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:06.735 Test: test_nvme_transport_ctrlr_ready ...[2024-04-27 00:27:40.104160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:06.735 [2024-04-27 00:27:40.104211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4079:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:08:06.735 passed 00:08:06.735 Test: test_nvme_ctrlr_disable ...[2024-04-27 00:27:40.104251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4147:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:06.735 passed 00:08:06.735 00:08:06.735 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.735 suites 1 1 n/a 0 0 00:08:06.735 tests 43 43 43 0 0 00:08:06.735 asserts 10418 10418 10418 0 n/a 00:08:06.735 00:08:06.735 Elapsed time = 0.320 seconds 00:08:06.735 00:27:40 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:08:06.735 00:08:06.735 00:08:06.735 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:06.735 http://cunit.sourceforge.net/ 00:08:06.735 00:08:06.735 00:08:06.735 Suite: nvme_ctrlr_cmd 00:08:06.735 Test: test_get_log_pages ...passed 00:08:06.735 Test: test_set_feature_cmd ...passed 00:08:06.735 Test: test_set_feature_ns_cmd ...passed 00:08:06.735 Test: test_get_feature_cmd ...passed 00:08:06.735 Test: test_get_feature_ns_cmd ...passed 00:08:06.735 Test: test_abort_cmd ...passed 00:08:06.735 Test: test_set_host_id_cmds ...[2024-04-27 00:27:40.149693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:06.735 passed 00:08:06.735 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:06.735 Test: test_io_raw_cmd ...passed 00:08:06.735 Test: test_io_raw_cmd_with_md ...passed 00:08:06.735 Test: test_namespace_attach ...passed 00:08:06.735 Test: test_namespace_detach ...passed 00:08:06.735 Test: test_namespace_create ...passed 00:08:06.735 Test: test_namespace_delete ...passed 00:08:06.735 Test: test_doorbell_buffer_config ...passed 00:08:06.735 Test: test_format_nvme ...passed 00:08:06.735 Test: test_fw_commit ...passed 00:08:06.735 Test: test_fw_image_download ...passed 00:08:06.735 Test: test_sanitize ...passed 00:08:06.735 Test: test_directive ...passed 00:08:06.735 Test: test_nvme_request_add_abort ...passed 00:08:06.735 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:06.735 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:06.735 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:06.735 00:08:06.735 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.735 suites 1 1 n/a 0 0 00:08:06.735 tests 24 24 24 0 0 00:08:06.735 asserts 198 198 198 0 n/a 00:08:06.735 00:08:06.735 Elapsed time = 0.001 seconds 00:08:06.735 00:27:40 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:06.735 00:08:06.735 00:08:06.735 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.735 http://cunit.sourceforge.net/ 00:08:06.735 00:08:06.735 00:08:06.735 Suite: nvme_ctrlr_cmd 00:08:06.735 Test: test_geometry_cmd ...passed 00:08:06.735 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:06.735 00:08:06.735 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.735 suites 1 1 n/a 0 0 00:08:06.735 tests 2 2 2 0 0 00:08:06.735 asserts 7 7 7 0 n/a 00:08:06.735 00:08:06.735 Elapsed time = 0.000 seconds 00:08:06.735 00:27:40 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:06.735 00:08:06.735 00:08:06.735 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.736 http://cunit.sourceforge.net/ 00:08:06.736 00:08:06.736 00:08:06.736 Suite: nvme 00:08:06.736 Test: test_nvme_ns_construct ...passed 00:08:06.736 Test: test_nvme_ns_uuid ...passed 00:08:06.736 Test: test_nvme_ns_csi ...passed 00:08:06.736 Test: test_nvme_ns_data ...passed 00:08:06.736 Test: test_nvme_ns_set_identify_data ...passed 00:08:06.736 Test: test_spdk_nvme_ns_get_values ...passed 00:08:06.736 Test: test_spdk_nvme_ns_is_active ...passed 00:08:06.736 Test: spdk_nvme_ns_supports ...passed 00:08:06.736 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:06.736 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:06.736 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:06.736 Test: test_nvme_ns_find_id_desc ...passed 00:08:06.736 00:08:06.736 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.736 suites 1 1 n/a 0 0 00:08:06.736 tests 
12 12 12 0 0 00:08:06.736 asserts 83 83 83 0 n/a 00:08:06.736 00:08:06.736 Elapsed time = 0.000 seconds 00:08:06.736 00:27:40 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:06.736 00:08:06.736 00:08:06.736 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.736 http://cunit.sourceforge.net/ 00:08:06.736 00:08:06.736 00:08:06.736 Suite: nvme_ns_cmd 00:08:06.736 Test: split_test ...passed 00:08:06.736 Test: split_test2 ...passed 00:08:06.736 Test: split_test3 ...passed 00:08:06.736 Test: split_test4 ...passed 00:08:06.736 Test: test_nvme_ns_cmd_flush ...passed 00:08:06.736 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:06.736 Test: test_nvme_ns_cmd_copy ...passed 00:08:06.736 Test: test_io_flags ...[2024-04-27 00:27:40.229041] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:06.736 passed 00:08:06.736 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:06.736 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:06.736 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:06.736 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:06.736 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:06.736 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:06.736 Test: test_cmd_child_request ...passed 00:08:06.736 Test: test_nvme_ns_cmd_readv ...passed 00:08:06.736 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:06.736 Test: test_nvme_ns_cmd_writev ...[2024-04-27 00:27:40.231337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:06.736 passed 00:08:06.736 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:06.736 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:06.736 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:06.736 Test: test_nvme_ns_cmd_comparev ...passed 00:08:06.736 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:06.736 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:06.736 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:06.736 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:06.736 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:06.736 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-04-27 00:27:40.234066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:06.736 passed 00:08:06.736 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-04-27 00:27:40.234241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:06.736 passed 00:08:06.736 Test: test_nvme_ns_cmd_verify ...passed 00:08:06.736 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:06.736 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:06.736 00:08:06.736 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.736 suites 1 1 n/a 0 0 00:08:06.736 tests 32 32 32 0 0 00:08:06.736 asserts 550 550 550 0 n/a 00:08:06.736 00:08:06.736 Elapsed time = 0.007 seconds 00:08:06.736 00:27:40 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:06.736 00:08:06.736 00:08:06.736 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.736 http://cunit.sourceforge.net/ 00:08:06.736 00:08:06.736 00:08:06.736 Suite: nvme_ns_cmd 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
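The "Invalid io_flags 0xfffc" and "0xffff000f" errors in the nvme_ns_cmd suite above show the reserved-bit check applied before any I/O is built. A minimal sketch of a read that passes only a defined flag bit, assuming the public spdk/nvme.h API; the namespace, qpair, and DMA-able buffer are placeholders set up elsewhere.

```c
/* Sketch of an I/O submission with valid io_flags; undefined low bits
 * such as 0xfffc fail _is_io_flags_valid() before submission.
 * Assumes spdk/nvme.h and a buffer from e.g. spdk_dma_zmalloc(). */
#include "spdk/nvme.h"

static void
read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg;
    (void)cpl; /* check spdk_nvme_cpl_is_error(cpl) in real code */
}

static int
submit_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
    /* FUA is a defined flag bit and is accepted by the validity check. */
    return spdk_nvme_ns_cmd_read(ns, qpair, buf,
                                 0 /* starting LBA */,
                                 1 /* number of LBAs */,
                                 read_done, NULL,
                                 SPDK_NVME_IO_FLAGS_FORCE_UNIT_ACCESS);
}
```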
00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:06.736 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:06.736 00:08:06.736 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.736 suites 1 1 n/a 0 0 00:08:06.736 tests 12 12 12 0 0 00:08:06.736 asserts 123 123 123 0 n/a 00:08:06.736 00:08:06.736 Elapsed time = 0.001 seconds 00:08:06.736 00:27:40 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:06.736 00:08:06.736 00:08:06.736 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.736 http://cunit.sourceforge.net/ 00:08:06.736 00:08:06.736 00:08:06.736 Suite: nvme_qpair 00:08:06.736 Test: test3 ...passed 00:08:06.736 Test: test_ctrlr_failed ...passed 00:08:06.736 Test: struct_packing ...passed 00:08:06.736 Test: test_nvme_qpair_process_completions ...[2024-04-27 00:27:40.291995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:06.736 [2024-04-27 00:27:40.292384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:06.736 [2024-04-27 00:27:40.292488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:06.736 passed 00:08:06.736 Test: test_nvme_completion_is_retry ...passed 00:08:06.736 Test: test_get_status_string ...passed 00:08:06.736 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-04-27 00:27:40.292589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:06.736 passed 00:08:06.736 Test: test_nvme_qpair_submit_request ...passed 00:08:06.736 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:06.736 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:06.736 Test: test_nvme_qpair_init_deinit ...[2024-04-27 00:27:40.293108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:06.736 passed 00:08:06.736 Test: test_nvme_get_sgl_print_info ...passed 00:08:06.736 00:08:06.736 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.736 suites 1 1 n/a 0 0 00:08:06.736 tests 12 12 12 0 0 00:08:06.736 asserts 154 154 154 0 n/a 00:08:06.736 00:08:06.736 Elapsed time = 0.001 seconds 00:08:06.736 00:27:40 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:06.996 00:08:06.996 00:08:06.996 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.996 http://cunit.sourceforge.net/ 00:08:06.996 00:08:06.996 00:08:06.996 Suite: nvme_pcie 00:08:06.996 Test: test_prp_list_append 
...[2024-04-27 00:27:40.322510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:06.996 [2024-04-27 00:27:40.322832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:06.996 [2024-04-27 00:27:40.322876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:06.996 [2024-04-27 00:27:40.323093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:06.996 passed 00:08:06.996 Test: test_nvme_pcie_hotplug_monitor ...[2024-04-27 00:27:40.323190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:06.996 passed 00:08:06.996 Test: test_shadow_doorbell_update ...passed 00:08:06.996 Test: test_build_contig_hw_sgl_request ...passed 00:08:06.996 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:06.996 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:06.996 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:06.996 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:08:06.996 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:06.996 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:06.996 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-04-27 00:27:40.323348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:06.996 [2024-04-27 00:27:40.323459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:08:06.996 passed 00:08:06.996 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:08:06.996 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:08:06.996 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-04-27 00:27:40.323541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:06.996 [2024-04-27 00:27:40.323637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:06.996 [2024-04-27 00:27:40.323688] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:06.996 passed 00:08:06.996 00:08:06.996 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.996 suites 1 1 n/a 0 0 00:08:06.996 tests 14 14 14 0 0 00:08:06.996 asserts 235 235 235 0 n/a 00:08:06.996 00:08:06.996 Elapsed time = 0.001 seconds 00:08:06.996 00:27:40 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:06.996 00:08:06.996 00:08:06.996 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.996 http://cunit.sourceforge.net/ 00:08:06.996 00:08:06.996 00:08:06.996 Suite: nvme_ns_cmd 00:08:06.996 Test: nvme_poll_group_create_test ...passed 00:08:06.996 Test: nvme_poll_group_add_remove_test ...passed 00:08:06.996 Test: nvme_poll_group_process_completions ...passed 00:08:06.996 Test: nvme_poll_group_destroy_test ...passed 00:08:06.996 Test: nvme_poll_group_get_free_stats ...passed 00:08:06.996 00:08:06.996 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.996 suites 1 1 n/a 0 0 00:08:06.996 tests 5 5 5 0 0 00:08:06.996 asserts 75 75 75 0 n/a 00:08:06.996 00:08:06.996 Elapsed time = 0.001 seconds 00:08:06.996 00:27:40 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:06.996 00:08:06.996 00:08:06.996 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.996 http://cunit.sourceforge.net/ 00:08:06.996 00:08:06.996 00:08:06.996 Suite: nvme_quirks 00:08:06.996 Test: test_nvme_quirks_striping ...passed 00:08:06.996 00:08:06.996 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.996 suites 1 1 n/a 0 0 00:08:06.996 tests 1 1 1 0 0 00:08:06.996 asserts 5 5 5 0 n/a 00:08:06.996 00:08:06.996 Elapsed time = 0.000 seconds 00:08:06.997 00:27:40 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:06.997 00:08:06.997 00:08:06.997 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.997 http://cunit.sourceforge.net/ 00:08:06.997 00:08:06.997 00:08:06.997 Suite: nvme_tcp 00:08:06.997 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:06.997 Test: test_nvme_tcp_build_iovs ...passed 00:08:06.997 Test: test_nvme_tcp_build_sgl_request ...passed 00:08:06.997 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...[2024-04-27 00:27:40.404878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffcc29495e0, and the iovcnt=16, remaining_size=28672 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:06.997 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:06.997 Test: test_nvme_tcp_req_get ...passed 00:08:06.997 Test: test_nvme_tcp_req_init ...passed 00:08:06.997 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:06.997 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:06.997 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed 00:08:06.997 Test: test_nvme_tcp_alloc_reqs ...[2024-04-27 00:27:40.405464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294b310 is same with the state(6) to be set 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-04-27 00:27:40.405773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a4a0 is same with the state(5) to be set 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_pdu_ch_handle ...[2024-04-27 00:27:40.405833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffcc294aff0 00:08:06.997 [2024-04-27 00:27:40.405879] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1223:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:06.997 [2024-04-27 00:27:40.405952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a960 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.406001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1174:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:06.997 [2024-04-27 00:27:40.406088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a960 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.406138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:06.997 [2024-04-27 00:27:40.406169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a960 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.406229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a960 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.406267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a960 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.406340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a960 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.406421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a960 is same with the state(5) to be set 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_qpair_connect_sock ...[2024-04-27 00:27:40.406473] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a960 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.406614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:06.997 [2024-04-27 00:27:40.406657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:06.997 [2024-04-27 00:27:40.406864] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:06.997 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:08:06.997 Test: test_nvme_tcp_icresp_handle ...[2024-04-27 00:27:40.406975] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcc294ab30): PDU Sequence Error 00:08:06.997 [2024-04-27 00:27:40.407035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1564:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:06.997 [2024-04-27 00:27:40.407075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1571:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:06.997 [2024-04-27 00:27:40.407112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a4b0 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.407149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1580:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:06.997 [2024-04-27 00:27:40.407185] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a4b0 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.407240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc294a4b0 is same with the state(0) to be set 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:08:06.997 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:08:06.997 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-04-27 00:27:40.407298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1338:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcc294aff0): PDU Sequence Error 00:08:06.997 [2024-04-27 00:27:40.407382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1641:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffcc2949780 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-04-27 00:27:40.407550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffcc2948e00, errno=0, rc=0 00:08:06.997 [2024-04-27 00:27:40.407602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc2948e00 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.407663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcc2948e00 is same with the state(5) to be set 00:08:06.997 [2024-04-27 00:27:40.407713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcc2948e00 (0): Success 00:08:06.997 [2024-04-27 00:27:40.407752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcc2948e00 (0): Success 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-04-27 00:27:40.511825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
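The "Failed to create qpair with size 0/1. Minimum queue size is 2." errors around this point are the TCP transport rejecting undersized queues at creation time. A minimal sketch of the normal sizing path, assuming the public spdk/nvme.h API; 128 is an illustrative value and the controller is assumed attached elsewhere.

```c
/* Sketch of I/O qpair creation with an explicit queue size; anything
 * below 2 is rejected, as the unit test demonstrates.
 * Assumes spdk/nvme.h. */
#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
make_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_io_qpair_opts opts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.io_queue_size = 128; /* illustrative; default is transport dependent */

    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```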
00:08:06.997 [2024-04-27 00:27:40.511952] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:06.997 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:08:06.997 Test: test_nvme_tcp_ctrlr_construct ...[2024-04-27 00:27:40.512153] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:06.997 [2024-04-27 00:27:40.512199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2952:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:06.997 [2024-04-27 00:27:40.512384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2504:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:06.997 [2024-04-27 00:27:40.512433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:06.997 [2024-04-27 00:27:40.512532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2321:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:06.997 [2024-04-27 00:27:40.512603] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:06.997 [2024-04-27 00:27:40.512690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000000c40 with addr=192.168.1.78, port=23 00:08:06.997 passed 00:08:06.997 Test: test_nvme_tcp_qpair_submit_request ...[2024-04-27 00:27:40.512744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:06.997 [2024-04-27 00:27:40.512867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 824:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:08:06.997 [2024-04-27 00:27:40.512909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1017:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:06.997 passed 00:08:06.997 00:08:06.997 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.997 suites 1 1 n/a 0 0 00:08:06.997 tests 27 27 27 0 0 00:08:06.997 asserts 624 624 624 0 n/a 00:08:06.997 00:08:06.997 Elapsed time = 0.106 seconds 00:08:06.997 00:27:40 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:06.997 00:08:06.997 00:08:06.997 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.997 http://cunit.sourceforge.net/ 00:08:06.997 00:08:06.997 00:08:06.997 Suite: nvme_transport 00:08:06.997 Test: test_nvme_get_transport ...passed 00:08:06.997 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:06.997 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:06.997 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:06.997 Test: test_ctrlr_get_memory_domains ...passed 00:08:06.997 00:08:06.997 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.997 suites 1 1 n/a 0 0 00:08:06.997 tests 5 5 5 0 0 00:08:06.997 asserts 28 28 28 0 n/a 00:08:06.997 00:08:06.997 Elapsed time = 0.000 seconds 00:08:06.997 00:27:40 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:06.997 00:08:06.997 00:08:06.997 CUnit - A unit testing framework for 
C - Version 2.1-3 00:08:06.997 http://cunit.sourceforge.net/ 00:08:06.997 00:08:06.997 00:08:06.997 Suite: nvme_io_msg 00:08:06.997 Test: test_nvme_io_msg_send ...passed 00:08:06.997 Test: test_nvme_io_msg_process ...passed 00:08:06.997 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:06.997 00:08:06.997 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.997 suites 1 1 n/a 0 0 00:08:06.997 tests 3 3 3 0 0 00:08:06.997 asserts 56 56 56 0 n/a 00:08:06.997 00:08:06.997 Elapsed time = 0.000 seconds 00:08:07.257 00:27:40 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:07.257 00:08:07.257 00:08:07.257 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.257 http://cunit.sourceforge.net/ 00:08:07.257 00:08:07.257 00:08:07.257 Suite: nvme_pcie_common 00:08:07.257 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-04-27 00:27:40.607101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:07.257 passed 00:08:07.257 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:07.257 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:07.257 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-04-27 00:27:40.608459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:07.257 [2024-04-27 00:27:40.608894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:07.257 [2024-04-27 00:27:40.608946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:07.257 passed 00:08:07.257 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:08:07.257 Test: test_nvme_pcie_poll_group_get_stats ...[2024-04-27 00:27:40.609584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:07.257 [2024-04-27 00:27:40.609644] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:07.257 passed 00:08:07.257 00:08:07.257 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.257 suites 1 1 n/a 0 0 00:08:07.257 tests 6 6 6 0 0 00:08:07.257 asserts 148 148 148 0 n/a 00:08:07.257 00:08:07.257 Elapsed time = 0.003 seconds 00:08:07.257 00:27:40 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:07.257 00:08:07.257 00:08:07.257 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.257 http://cunit.sourceforge.net/ 00:08:07.257 00:08:07.257 00:08:07.257 Suite: nvme_fabric 00:08:07.257 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:07.257 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:07.257 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:07.257 Test: test_nvme_fabric_discover_probe ...passed 00:08:07.257 Test: test_nvme_fabric_qpair_connect ...[2024-04-27 00:27:40.638005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:07.257 passed 00:08:07.257 00:08:07.257 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.257 suites 
1 1 n/a 0 0 00:08:07.257 tests 5 5 5 0 0 00:08:07.258 asserts 60 60 60 0 n/a 00:08:07.258 00:08:07.258 Elapsed time = 0.001 seconds 00:08:07.258 00:27:40 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:07.258 00:08:07.258 00:08:07.258 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.258 http://cunit.sourceforge.net/ 00:08:07.258 00:08:07.258 00:08:07.258 Suite: nvme_opal 00:08:07.258 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:07.258 Test: test_opal_add_short_atom_header ...[2024-04-27 00:27:40.668586] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:07.258 passed 00:08:07.258 00:08:07.258 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.258 suites 1 1 n/a 0 0 00:08:07.258 tests 2 2 2 0 0 00:08:07.258 asserts 22 22 22 0 n/a 00:08:07.258 00:08:07.258 Elapsed time = 0.000 seconds 00:08:07.258 00:08:07.258 real 0m1.103s 00:08:07.258 user 0m0.579s 00:08:07.258 sys 0m0.378s 00:08:07.258 00:27:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:07.258 00:27:40 -- common/autotest_common.sh@10 -- # set +x 00:08:07.258 ************************************ 00:08:07.258 END TEST unittest_nvme 00:08:07.258 ************************************ 00:08:07.258 00:27:40 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:07.258 00:27:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:07.258 00:27:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.258 00:27:40 -- common/autotest_common.sh@10 -- # set +x 00:08:07.258 ************************************ 00:08:07.258 START TEST unittest_log 00:08:07.258 ************************************ 00:08:07.258 00:27:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:07.258 00:08:07.258 00:08:07.258 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.258 http://cunit.sourceforge.net/ 00:08:07.258 00:08:07.258 00:08:07.258 Suite: log 00:08:07.258 Test: log_test ...[2024-04-27 00:27:40.776825] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:08:07.258 [2024-04-27 00:27:40.777119] log_ut.c: 57:log_test: *DEBUG*: log test 00:08:07.258 passed 00:08:07.258 Test: deprecation ...log dump test: 00:08:07.258 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:07.258 spdk dump test: 00:08:07.258 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:07.258 spdk dump test: 00:08:07.258 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:07.258 00000010 65 20 63 68 61 72 73 e chars 00:08:08.304 passed 00:08:08.304 00:08:08.304 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.304 suites 1 1 n/a 0 0 00:08:08.304 tests 2 2 2 0 0 00:08:08.304 asserts 73 73 73 0 n/a 00:08:08.304 00:08:08.304 Elapsed time = 0.001 seconds 00:08:08.304 00:08:08.304 real 0m1.032s 00:08:08.304 user 0m0.024s 00:08:08.304 sys 0m0.008s 00:08:08.304 00:27:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.304 00:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.304 ************************************ 00:08:08.304 END TEST unittest_log 00:08:08.304 ************************************ 00:08:08.304 00:27:41 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:08.304 00:27:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 
']' 00:08:08.304 00:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.304 00:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.304 ************************************ 00:08:08.304 START TEST unittest_lvol 00:08:08.304 ************************************ 00:08:08.304 00:27:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:08.564 00:08:08.564 00:08:08.564 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.564 http://cunit.sourceforge.net/ 00:08:08.564 00:08:08.564 00:08:08.564 Suite: lvol 00:08:08.564 Test: lvs_init_unload_success ...[2024-04-27 00:27:41.895204] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:08.564 passed 00:08:08.564 Test: lvs_init_destroy_success ...[2024-04-27 00:27:41.895618] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:08.564 passed 00:08:08.564 Test: lvs_init_opts_success ...passed 00:08:08.564 Test: lvs_unload_lvs_is_null_fail ...passed 00:08:08.564 Test: lvs_names ...[2024-04-27 00:27:41.895817] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:08.564 [2024-04-27 00:27:41.895869] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:08.564 [2024-04-27 00:27:41.895914] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:08:08.564 [2024-04-27 00:27:41.896044] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:08.564 passed 00:08:08.564 Test: lvol_create_destroy_success ...passed 00:08:08.564 Test: lvol_create_fail ...[2024-04-27 00:27:41.896479] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:08.564 [2024-04-27 00:27:41.896597] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:08.564 passed 00:08:08.564 Test: lvol_destroy_fail ...[2024-04-27 00:27:41.896865] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:08.564 passed 00:08:08.564 Test: lvol_close ...[2024-04-27 00:27:41.897024] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:08.564 passed 00:08:08.564 Test: lvol_resize ...[2024-04-27 00:27:41.897080] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:08.564 passed 00:08:08.564 Test: lvol_set_read_only ...passed 00:08:08.564 Test: test_lvs_load ...[2024-04-27 00:27:41.897730] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:08.564 passed 00:08:08.564 Test: lvols_load ...[2024-04-27 00:27:41.897783] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:08.564 [2024-04-27 00:27:41.897986] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:08.564 passed 00:08:08.564 Test: lvol_open ...[2024-04-27 00:27:41.898098] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:08.564 passed 00:08:08.564 Test: lvol_snapshot ...passed 00:08:08.564 Test: lvol_snapshot_fail ...[2024-04-27 
00:27:41.898763] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:08.564 passed 00:08:08.564 Test: lvol_clone ...passed 00:08:08.564 Test: lvol_clone_fail ...[2024-04-27 00:27:41.899204] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:08.564 passed 00:08:08.564 Test: lvol_iter_clones ...passed 00:08:08.564 Test: lvol_refcnt ...[2024-04-27 00:27:41.899674] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 63450e91-a101-4147-b44f-4ffe858ba0c3 because it is still open 00:08:08.564 passed 00:08:08.564 Test: lvol_names ...[2024-04-27 00:27:41.899847] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:08.564 [2024-04-27 00:27:41.899930] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:08.564 [2024-04-27 00:27:41.900105] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:08.564 passed 00:08:08.564 Test: lvol_create_thin_provisioned ...passed 00:08:08.564 Test: lvol_rename ...[2024-04-27 00:27:41.900435] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:08.564 [2024-04-27 00:27:41.900525] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:08.564 passed 00:08:08.564 Test: lvs_rename ...[2024-04-27 00:27:41.900729] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:08.564 passed 00:08:08.564 Test: lvol_inflate ...[2024-04-27 00:27:41.900900] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:08.564 passed 00:08:08.564 Test: lvol_decouple_parent ...[2024-04-27 00:27:41.901142] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:08.564 passed 00:08:08.564 Test: lvol_get_xattr ...passed 00:08:08.564 Test: lvol_esnap_reload ...passed 00:08:08.564 Test: lvol_esnap_create_bad_args ...[2024-04-27 00:27:41.901495] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:08.564 [2024-04-27 00:27:41.901550] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
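For reference while reading these per-suite blocks: each *_ut binary in this run (json_write_ut through lvol_ut and beyond) is a standalone CUnit program, which is what produces the banner, the "Test: ... passed" lines, and the "Run Summary" table. A simplified skeleton follows; SPDK's real harnesses wrap this shape with stubs and allocators, so treat it as a sketch, not the project's actual main().

```c
/* Simplified skeleton of a *_ut binary; produces output in the same
 * shape as the CUnit blocks in this log. */
#include <CUnit/Basic.h>

static void
test_example(void)
{
    CU_ASSERT(1 + 1 == 2); /* each "Test: ... passed" line is one of these */
}

int
main(void)
{
    CU_pSuite suite;
    unsigned int failures;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("example", NULL, NULL);
    if (suite == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_add_test(suite, "test_example", test_example);

    CU_basic_set_mode(CU_BRM_VERBOSE); /* prints the per-test lines */
    CU_basic_run_tests();              /* prints the Run Summary table */
    failures = CU_get_number_of_failures();
    CU_cleanup_registry();

    return failures == 0 ? 0 : 1;
}
```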
00:08:08.564 [2024-04-27 00:27:41.901595] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:08.564 [2024-04-27 00:27:41.901701] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:08.564 [2024-04-27 00:27:41.901817] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:08.564 passed 00:08:08.564 Test: lvol_esnap_create_delete ...passed 00:08:08.564 Test: lvol_esnap_load_esnaps ...passed 00:08:08.564 Test: lvol_esnap_missing ...[2024-04-27 00:27:41.902098] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:08.564 [2024-04-27 00:27:41.902203] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:08.564 [2024-04-27 00:27:41.902255] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:08.564 passed 00:08:08.564 Test: lvol_esnap_hotplug ... 00:08:08.564 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:08.564 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:08.564 [2024-04-27 00:27:41.902778] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 40ab1639-7673-4980-a7c3-7b4a0b66b5fb: failed to create esnap bs_dev: error -12 00:08:08.564 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:08.564 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:08.564 [2024-04-27 00:27:41.902957] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 8d918b47-bd7d-42d0-8b6a-da6de908f52c: failed to create esnap bs_dev: error -12 00:08:08.564 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:08.564 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:08.564 [2024-04-27 00:27:41.903063] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol cadfe992-0a1b-4298-981e-313c9041ba54: failed to create esnap bs_dev: error -12 00:08:08.564 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:08.564 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:08.564 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:08.564 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:08.564 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:08.564 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:08.564 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:08.564 passed 00:08:08.564 Test: lvol_get_by ...passed 00:08:08.564 00:08:08.564 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.564 suites 1 1 n/a 0 0 00:08:08.564 tests 34 34 34 0 0 00:08:08.564 asserts 1439 1439 1439 0 n/a 00:08:08.564 00:08:08.564 Elapsed time = 0.009 seconds 00:08:08.564 00:08:08.564 real 0m0.041s 00:08:08.564 user 0m0.012s 00:08:08.564 sys 0m0.029s 00:08:08.565 00:27:41 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.565 00:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 ************************************ 00:08:08.565 END TEST unittest_lvol 00:08:08.565 ************************************ 00:08:08.565 00:27:41 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:08.565 00:27:41 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:08.565 00:27:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.565 00:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.565 00:27:41 -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 ************************************ 00:08:08.565 START TEST unittest_nvme_rdma 00:08:08.565 ************************************ 00:08:08.565 00:27:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:08.565 00:08:08.565 00:08:08.565 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.565 http://cunit.sourceforge.net/ 00:08:08.565 00:08:08.565 00:08:08.565 Suite: nvme_rdma 00:08:08.565 Test: test_nvme_rdma_build_sgl_request ...[2024-04-27 00:27:42.017552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:08.565 [2024-04-27 00:27:42.017903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:08:08.565 Test: test_nvme_rdma_build_contig_request ...[2024-04-27 00:27:42.018007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:08.565 [2024-04-27 00:27:42.018096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:08.565 Test: test_nvme_rdma_create_reqs ...[2024-04-27 00:27:42.018216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_create_rsps ...[2024-04-27 00:27:42.018581] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-04-27 00:27:42.018792] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_poller_create ...[2024-04-27 00:27:42.018870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
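
The 16777215 in the nvme_rdma SGL failures above is not arbitrary: an NVMe keyed SGL data block descriptor carries its length in a 24-bit field, so 2^24 - 1 bytes is the largest transfer one descriptor can describe, and the tests probe exactly one byte past it. A sketch of the bound (the helper name is hypothetical):

    #include <stdint.h>

    /* 24-bit length field in a keyed SGL data block descriptor. */
    #define MAX_KEYED_SGL_LEN ((1u << 24) - 1)    /* 16777215 */

    /* Hypothetical guard: 0 if len fits in a single keyed SGL. */
    static int
    keyed_sgl_len_ok(uint64_t len)
    {
        return len <= MAX_KEYED_SGL_LEN ? 0 : -1;
    }
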
00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:08:08.565 Test: test_nvme_rdma_ctrlr_construct ...[2024-04-27 00:27:42.019083] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:08.565 Test: test_nvme_rdma_req_init ...passed 00:08:08.565 Test: test_nvme_rdma_validate_cm_event ...passed 00:08:08.565 Test: test_nvme_rdma_qpair_init ...[2024-04-27 00:27:42.019404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:08.565 [2024-04-27 00:27:42.019458] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:08.565 Test: test_nvme_rdma_memory_domain ...[2024-04-27 00:27:42.019661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:08:08.565 passed 00:08:08.565 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:08.565 Test: test_rdma_get_memory_translation ...[2024-04-27 00:27:42.019779] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:08.565 passed 00:08:08.565 Test: test_get_rdma_qpair_from_wc ...passed 00:08:08.565 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:08.565 Test: test_nvme_rdma_poll_group_get_stats ...[2024-04-27 00:27:42.019845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:08.565 [2024-04-27 00:27:42.019935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:08.565 [2024-04-27 00:27:42.019978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:08.565 passed 00:08:08.565 Test: test_nvme_rdma_qpair_set_poller ...[2024-04-27 00:27:42.020123] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:08.565 [2024-04-27 00:27:42.020176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:08.565 [2024-04-27 00:27:42.020218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe6cb90340 on poll group 0x60c000000040 00:08:08.565 [2024-04-27 00:27:42.020282] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:08.565 [2024-04-27 00:27:42.020330] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:08.565 [2024-04-27 00:27:42.020378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe6cb90340 on poll group 0x60c000000040 00:08:08.565 [2024-04-27 00:27:42.020456] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:08.565 passed 00:08:08.565 00:08:08.565 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.565 suites 1 1 n/a 0 0 00:08:08.565 tests 22 22 22 0 0 00:08:08.565 asserts 412 412 412 0 n/a 00:08:08.565 00:08:08.565 Elapsed time = 0.003 seconds 00:08:08.565 00:08:08.565 real 0m0.029s 00:08:08.565 user 0m0.020s 00:08:08.565 sys 0m0.010s 00:08:08.565 00:27:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.565 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 ************************************ 00:08:08.565 END TEST unittest_nvme_rdma 00:08:08.565 ************************************ 00:08:08.565 00:27:42 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:08.565 00:27:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.565 00:27:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.565 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 ************************************ 00:08:08.565 START TEST unittest_nvmf_transport 00:08:08.565 ************************************ 00:08:08.565 00:27:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:08.565 00:08:08.565 00:08:08.565 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.565 http://cunit.sourceforge.net/ 00:08:08.565 00:08:08.565 00:08:08.565 Suite: nvmf 00:08:08.565 Test: test_spdk_nvmf_transport_create ...[2024-04-27 00:27:42.129881] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:08.565 [2024-04-27 00:27:42.130302] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:08.565 [2024-04-27 00:27:42.130506] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:08.565 [2024-04-27 00:27:42.130678] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:08.565 passed 00:08:08.565 Test: test_nvmf_transport_poll_group_create ...passed 00:08:08.565 Test: test_spdk_nvmf_transport_opts_init ...[2024-04-27 00:27:42.131013] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
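
The transport_ut failures just above all come from option validation at transport create time, including the power-of-two rule spelled out in the max_io_size message. An illustrative shape for those checks (hypothetical helper, not SPDK's internal code):

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    transport_opts_plausible(uint32_t io_unit_size, uint32_t max_io_size)
    {
        if (io_unit_size == 0) {
            return false;    /* "io_unit_size cannot be 0" */
        }
        /* A power of two has a single set bit, so x != 0 and
         * (x & (x - 1)) == 0; the message also demands >= 8 KiB. */
        if (max_io_size < 8192 ||
            (max_io_size & (max_io_size - 1)) != 0) {
            return false;
        }
        return true;
    }
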
00:08:08.565 [2024-04-27 00:27:42.131149] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:08.565 passed 00:08:08.565 Test: test_spdk_nvmf_transport_listen_ext ...[2024-04-27 00:27:42.131226] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:08.565 passed 00:08:08.565 00:08:08.565 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.565 suites 1 1 n/a 0 0 00:08:08.565 tests 4 4 4 0 0 00:08:08.565 asserts 49 49 49 0 n/a 00:08:08.565 00:08:08.565 Elapsed time = 0.002 seconds 00:08:08.565 00:08:08.565 real 0m0.032s 00:08:08.565 user 0m0.012s 00:08:08.565 sys 0m0.020s 00:08:08.565 00:27:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.565 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 ************************************ 00:08:08.565 END TEST unittest_nvmf_transport 00:08:08.565 ************************************ 00:08:08.824 00:27:42 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:08.824 00:27:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.824 00:27:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.824 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:08.824 ************************************ 00:08:08.824 START TEST unittest_rdma 00:08:08.824 ************************************ 00:08:08.824 00:27:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:08.824 00:08:08.824 00:08:08.824 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.824 http://cunit.sourceforge.net/ 00:08:08.824 00:08:08.824 00:08:08.824 Suite: rdma_common 00:08:08.824 Test: test_spdk_rdma_pd ...[2024-04-27 00:27:42.243566] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:08.824 [2024-04-27 00:27:42.244049] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:08.824 passed 00:08:08.824 00:08:08.824 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.824 suites 1 1 n/a 0 0 00:08:08.824 tests 1 1 1 0 0 00:08:08.824 asserts 31 31 31 0 n/a 00:08:08.824 00:08:08.824 Elapsed time = 0.001 seconds 00:08:08.824 00:08:08.824 real 0m0.033s 00:08:08.824 user 0m0.028s 00:08:08.824 sys 0m0.004s 00:08:08.824 00:27:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.824 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:08.824 ************************************ 00:08:08.824 END TEST unittest_rdma 00:08:08.824 ************************************ 00:08:08.824 00:27:42 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:08.824 00:27:42 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:08.824 00:27:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.824 00:27:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.824 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:08.824 ************************************ 00:08:08.824 START TEST unittest_nvme_cuse 00:08:08.824 ************************************ 00:08:08.824 00:27:42 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:08.824 00:08:08.824 00:08:08.824 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.825 http://cunit.sourceforge.net/ 00:08:08.825 00:08:08.825 00:08:08.825 Suite: nvme_cuse 00:08:08.825 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:08.825 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:08.825 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:08.825 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:08.825 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:08.825 Test: test_cuse_nvme_submit_io ...[2024-04-27 00:27:42.371362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:08.825 passed 00:08:08.825 Test: test_cuse_nvme_reset ...[2024-04-27 00:27:42.371660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:08.825 passed 00:08:08.825 Test: test_nvme_cuse_stop ...passed 00:08:08.825 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:08.825 00:08:08.825 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.825 suites 1 1 n/a 0 0 00:08:08.825 tests 9 9 9 0 0 00:08:08.825 asserts 118 118 118 0 n/a 00:08:08.825 00:08:08.825 Elapsed time = 0.003 seconds 00:08:08.825 00:08:08.825 real 0m0.036s 00:08:08.825 user 0m0.029s 00:08:08.825 sys 0m0.008s 00:08:08.825 00:27:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.825 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:08.825 ************************************ 00:08:08.825 END TEST unittest_nvme_cuse 00:08:08.825 ************************************ 00:08:09.084 00:27:42 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:08:09.084 00:27:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:09.084 00:27:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.084 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.084 ************************************ 00:08:09.084 START TEST unittest_nvmf 00:08:09.085 ************************************ 00:08:09.085 00:27:42 -- common/autotest_common.sh@1111 -- # unittest_nvmf 00:08:09.085 00:27:42 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:09.085 00:08:09.085 00:08:09.085 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.085 http://cunit.sourceforge.net/ 00:08:09.085 00:08:09.085 00:08:09.085 Suite: nvmf 00:08:09.085 Test: test_get_log_page ...[2024-04-27 00:27:42.494790] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2562:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:09.085 passed 00:08:09.085 Test: test_process_fabrics_cmd ...passed 00:08:09.085 Test: test_connect ...[2024-04-27 00:27:42.496196] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 956:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:09.085 [2024-04-27 00:27:42.496451] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:09.085 [2024-04-27 00:27:42.496619] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 995:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:09.085 [2024-04-27 00:27:42.496810] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
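
The SQSIZE rejections that follow in test_connect are a direct consequence of SQSIZE being zero-based: a queue of depth N admits values 1 through N-1, which is exactly why the log reports (min 1, max 31) for the depth-32 admin queue and (min 1, max 63) for a depth-64 I/O queue. The whole rule fits in one line (hypothetical helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* SQSIZE is a zero-based queue depth: 0 is always invalid and
     * the ceiling is queue_depth - 1. */
    static bool
    connect_sqsize_valid(uint16_t sqsize, uint16_t queue_depth)
    {
        return sqsize >= 1 && sqsize <= (uint16_t)(queue_depth - 1);
    }
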
00:08:09.085 [2024-04-27 00:27:42.497098] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 830:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:09.085 [2024-04-27 00:27:42.497266] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 837:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:09.085 [2024-04-27 00:27:42.497513] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 843:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:09.085 [2024-04-27 00:27:42.497709] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 870:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:09.085 [2024-04-27 00:27:42.497957] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:09.085 [2024-04-27 00:27:42.498179] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:09.085 [2024-04-27 00:27:42.498677] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 629:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:09.085 [2024-04-27 00:27:42.498910] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 635:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:09.085 [2024-04-27 00:27:42.499134] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 642:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:09.085 [2024-04-27 00:27:42.499344] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 665:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:09.085 [2024-04-27 00:27:42.499594] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 242:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:08:09.085 [2024-04-27 00:27:42.499858] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 750:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:08:09.085 [2024-04-27 00:27:42.500085] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 750:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:08:09.085 passed 00:08:09.085 Test: test_get_ns_id_desc_list ...passed 00:08:09.085 Test: test_identify_ns ...[2024-04-27 00:27:42.500896] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:09.085 [2024-04-27 00:27:42.501334] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:09.085 [2024-04-27 00:27:42.501593] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:09.085 passed 00:08:09.085 Test: test_identify_ns_iocs_specific ...[2024-04-27 00:27:42.502089] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:09.085 [2024-04-27 00:27:42.502527] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:09.085 passed 00:08:09.085 Test: test_reservation_write_exclusive ...passed 00:08:09.085 Test: test_reservation_exclusive_access ...passed 00:08:09.085 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:09.085 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:09.085 Test: test_reservation_notification_log_page ...passed 00:08:09.085 
Test: test_get_dif_ctx ...passed 00:08:09.085 Test: test_set_get_features ...[2024-04-27 00:27:42.504468] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1592:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:09.085 [2024-04-27 00:27:42.504657] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1592:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:09.085 [2024-04-27 00:27:42.504844] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1603:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:09.085 [2024-04-27 00:27:42.505001] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1679:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:09.085 passed 00:08:09.085 Test: test_identify_ctrlr ...passed 00:08:09.085 Test: test_identify_ctrlr_iocs_specific ...passed 00:08:09.085 Test: test_custom_admin_cmd ...passed 00:08:09.085 Test: test_fused_compare_and_write ...[2024-04-27 00:27:42.506218] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4164:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:09.085 [2024-04-27 00:27:42.506459] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4153:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:09.085 [2024-04-27 00:27:42.506642] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4171:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:09.085 passed 00:08:09.085 Test: test_multi_async_event_reqs ...passed 00:08:09.085 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:09.085 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:09.085 Test: test_multi_async_events ...passed 00:08:09.085 Test: test_rae ...passed 00:08:09.085 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:09.085 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:09.085 Test: test_spdk_nvmf_request_zcopy_start ...[2024-04-27 00:27:42.508266] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4291:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:08:09.085 passed 00:08:09.085 Test: test_zcopy_read ...passed 00:08:09.085 Test: test_zcopy_write ...passed 00:08:09.085 Test: test_nvmf_property_set ...passed 00:08:09.085 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-04-27 00:27:42.509379] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1890:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:09.085 [2024-04-27 00:27:42.509549] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1890:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:09.085 passed 00:08:09.085 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-04-27 00:27:42.509892] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1913:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:09.085 [2024-04-27 00:27:42.510058] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1919:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:09.085 [2024-04-27 00:27:42.510220] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1931:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:09.085 passed 00:08:09.085 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:08:09.085 00:08:09.085 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.085 suites 1 1 n/a 0 0 00:08:09.085 tests 31 31 31 0 0 00:08:09.085 asserts 951 951 951 0 n/a 
00:08:09.085 00:08:09.085 Elapsed time = 0.008 seconds 00:08:09.085 00:27:42 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:09.085 00:08:09.085 00:08:09.085 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.085 http://cunit.sourceforge.net/ 00:08:09.085 00:08:09.085 00:08:09.085 Suite: nvmf 00:08:09.085 Test: test_get_rw_params ...passed 00:08:09.085 Test: test_get_rw_ext_params ...passed 00:08:09.085 Test: test_lba_in_range ...passed 00:08:09.085 Test: test_get_dif_ctx ...passed 00:08:09.085 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:09.085 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-04-27 00:27:42.548061] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:09.085 [2024-04-27 00:27:42.548382] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:09.085 [2024-04-27 00:27:42.548478] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:09.085 passed 00:08:09.085 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-04-27 00:27:42.548552] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:09.085 [2024-04-27 00:27:42.548636] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:09.085 passed 00:08:09.085 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-04-27 00:27:42.548742] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:09.085 [2024-04-27 00:27:42.548783] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:09.085 [2024-04-27 00:27:42.548847] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:09.085 [2024-04-27 00:27:42.548885] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:09.085 passed 00:08:09.085 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:09.085 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:09.085 00:08:09.085 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.085 suites 1 1 n/a 0 0 00:08:09.085 tests 10 10 10 0 0 00:08:09.085 asserts 159 159 159 0 n/a 00:08:09.085 00:08:09.085 Elapsed time = 0.001 seconds 00:08:09.085 00:27:42 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:09.085 00:08:09.085 00:08:09.085 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.085 http://cunit.sourceforge.net/ 00:08:09.085 00:08:09.085 00:08:09.085 Suite: nvmf 00:08:09.085 Test: test_discovery_log ...passed 00:08:09.085 Test: test_discovery_log_with_filters ...passed 00:08:09.085 00:08:09.085 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.085 suites 1 1 n/a 0 0 00:08:09.085 tests 2 2 2 0 0 00:08:09.085 asserts 238 238 238 0 n/a 00:08:09.085 00:08:09.085 Elapsed time = 0.003 seconds 00:08:09.085 00:27:42 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:09.085 00:08:09.086 
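
Before the subsystem suite output begins below: the "end of media" failures in ctrlr_bdev_ut above reduce to a single range check. The idiomatic form compares nlb against num_blocks - lba instead of computing lba + nlb, so a huge LBA cannot wrap around 64 bits and sneak past the test. A sketch with hypothetical names:

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    io_within_media(uint64_t bdev_num_blocks, uint64_t lba, uint64_t nlb)
    {
        /* Overflow-safe: never forms lba + nlb. */
        if (lba >= bdev_num_blocks || nlb > bdev_num_blocks - lba) {
            return false;    /* -> "*ERROR*: end of media" */
        }
        return true;
    }
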
00:08:09.086 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.086 http://cunit.sourceforge.net/ 00:08:09.086 00:08:09.086 00:08:09.086 Suite: nvmf 00:08:09.086 Test: nvmf_test_create_subsystem ...[2024-04-27 00:27:42.617539] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:09.086 [2024-04-27 00:27:42.617868] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:08:09.086 [2024-04-27 00:27:42.618043] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:09.086 [2024-04-27 00:27:42.618145] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:08:09.086 [2024-04-27 00:27:42.618196] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:08:09.086 [2024-04-27 00:27:42.618242] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:08:09.086 [2024-04-27 00:27:42.618289] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:09.086 [2024-04-27 00:27:42.618345] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:08:09.086 [2024-04-27 00:27:42.618483] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:09.086 [2024-04-27 00:27:42.618528] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:08:09.086 [2024-04-27 00:27:42.618564] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
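
The nvmf_test_create_subsystem case above walks an NQN validator through its failure modes one rule at a time: the "nqn." prefix, a null terminator, and domain labels that start with a letter and end with an alphanumeric character. A simplified re-implementation of just the label rule, for reading these messages (illustrative only, not SPDK's nvmf_nqn_is_valid()):

    #include <ctype.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* One domain label, e.g. "io" or "spdk" in nqn.2016-06.io.spdk. */
    static bool
    nqn_label_ok(const char *label, size_t len)
    {
        if (len == 0 || !isalpha((unsigned char)label[0])) {
            return false;    /* "must start with a letter" */
        }
        /* "must end with an alphanumeric symbol" */
        return isalnum((unsigned char)label[len - 1]) != 0;
    }
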
00:08:09.086 [2024-04-27 00:27:42.618607] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:08:09.086 [2024-04-27 00:27:42.618732] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:09.086 [2024-04-27 00:27:42.618857] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:08:09.086 [2024-04-27 00:27:42.618976] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:08:09.086 [2024-04-27 00:27:42.619045] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:08:09.086 [2024-04-27 00:27:42.619140] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:09.086 [2024-04-27 00:27:42.619183] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:08:09.086 [2024-04-27 00:27:42.619225] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:09.086 [2024-04-27 00:27:42.619287] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:09.086 [2024-04-27 00:27:42.619330] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:09.086 [2024-04-27 00:27:42.619367] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:09.086 passed 00:08:09.086 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-04-27 00:27:42.619575] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:09.086 passed 00:08:09.086 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-04-27 00:27:42.619642] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1887:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:09.086 [2024-04-27 00:27:42.619921] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
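
On the NSID side of the subsystem checks above: 0 is reserved and 0xFFFFFFFF (4294967295) is NVMe's broadcast NSID, so neither can identify a concrete namespace; "already in use" is the remaining duplicate check against namespaces the subsystem already holds. A minimal sketch:

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    nsid_assignable(uint32_t nsid)
    {
        /* 0 is invalid; UINT32_MAX is the broadcast NSID. */
        return nsid != 0 && nsid != UINT32_MAX;
    }
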
00:08:09.086 passed 00:08:09.086 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:09.086 Test: test_spdk_nvmf_ns_visible ...[2024-04-27 00:27:42.620149] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:08:09.086 passed 00:08:09.086 Test: test_reservation_register ...[2024-04-27 00:27:42.620540] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2954:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:09.086 [2024-04-27 00:27:42.620678] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3012:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:09.086 passed 00:08:09.086 Test: test_reservation_register_with_ptpl ...passed 00:08:09.086 Test: test_reservation_acquire_preempt_1 ...[2024-04-27 00:27:42.621702] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2954:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:09.086 passed 00:08:09.086 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:09.086 Test: test_reservation_release ...[2024-04-27 00:27:42.623388] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2954:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:09.086 passed 00:08:09.086 Test: test_reservation_unregister_notification ...[2024-04-27 00:27:42.623649] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2954:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:09.086 passed 00:08:09.086 Test: test_reservation_release_notification ...[2024-04-27 00:27:42.623884] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2954:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:09.086 passed 00:08:09.086 Test: test_reservation_release_notification_write_exclusive ...[2024-04-27 00:27:42.624111] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2954:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:09.086 passed 00:08:09.086 Test: test_reservation_clear_notification ...[2024-04-27 00:27:42.624343] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2954:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:09.086 passed 00:08:09.086 Test: test_reservation_preempt_notification ...[2024-04-27 00:27:42.624579] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2954:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:09.086 passed 00:08:09.086 Test: test_spdk_nvmf_ns_event ...passed 00:08:09.086 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:09.086 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:09.086 Test: test_spdk_nvmf_subsystem_add_host ...[2024-04-27 00:27:42.625361] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:09.086 passed 00:08:09.086 Test: test_nvmf_ns_reservation_report ...[2024-04-27 00:27:42.625449] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:08:09.086 passed 00:08:09.086 Test: test_nvmf_nqn_is_valid ...[2024-04-27 00:27:42.625587] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3317:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:09.086 [2024-04-27 
00:27:42.625687] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:09.086 passed 00:08:09.086 Test: test_nvmf_ns_reservation_restore ...[2024-04-27 00:27:42.625761] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:64603307-daf7-47bf-995d-7a7afb5011a": uuid is not the correct length 00:08:09.086 [2024-04-27 00:27:42.625818] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:09.086 passed 00:08:09.086 Test: test_nvmf_subsystem_state_change ...[2024-04-27 00:27:42.625934] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2511:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:09.086 passed 00:08:09.086 Test: test_nvmf_reservation_custom_ops ...passed 00:08:09.086 00:08:09.086 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.086 suites 1 1 n/a 0 0 00:08:09.086 tests 24 24 24 0 0 00:08:09.086 asserts 497 497 497 0 n/a 00:08:09.086 00:08:09.086 Elapsed time = 0.010 seconds 00:08:09.086 00:27:42 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:09.086 00:08:09.086 00:08:09.086 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.086 http://cunit.sourceforge.net/ 00:08:09.086 00:08:09.086 00:08:09.086 Suite: nvmf 00:08:09.345 Test: test_nvmf_tcp_create ...[2024-04-27 00:27:42.686591] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 742:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:09.345 passed 00:08:09.346 Test: test_nvmf_tcp_destroy ...passed 00:08:09.346 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:09.346 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:09.346 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:09.346 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:09.346 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:09.346 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-04-27 00:27:42.790876] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 passed 00:08:09.346 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-04-27 00:27:42.791006] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22360bb0 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.791122] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22360bb0 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.791181] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.791234] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22360bb0 is same with the state(5) to be set 00:08:09.346 passed 00:08:09.346 Test: test_nvmf_tcp_icreq_handle ...[2024-04-27 00:27:42.791372] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2102:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:09.346 [2024-04-27 00:27:42.791475] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 
[2024-04-27 00:27:42.791550] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22360bb0 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.791589] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2102:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:09.346 [2024-04-27 00:27:42.791633] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22360bb0 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.791664] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.791718] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22360bb0 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.791762] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:09.346 passed 00:08:09.346 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:09.346 Test: test_nvmf_tcp_invalid_sgl ...[2024-04-27 00:27:42.791830] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22360bb0 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.791927] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2497:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:09.346 [2024-04-27 00:27:42.791995] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.792040] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22360bb0 is same with the state(5) to be set 00:08:09.346 passed 00:08:09.346 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-04-27 00:27:42.792109] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2229:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffe22361910 00:08:09.346 [2024-04-27 00:27:42.792213] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.792271] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.792331] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2286:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffe22361070 00:08:09.346 [2024-04-27 00:27:42.792374] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.792434] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.792484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2239:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:09.346 [2024-04-27 00:27:42.792530] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.792582] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.792633] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2278:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:09.346 [2024-04-27 00:27:42.792675] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.792717] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.792761] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.792814] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.792886] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.792932] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.792986] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.793029] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.793076] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.793120] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.793182] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 passed 00:08:09.346 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-04-27 00:27:42.793214] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 [2024-04-27 00:27:42.793265] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1083:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:09.346 [2024-04-27 00:27:42.793300] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe22361070 is same with the state(5) to be set 00:08:09.346 passed 00:08:09.346 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-04-27 00:27:42.819069] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:09.346 passed 00:08:09.346 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-04-27 00:27:42.819175] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:08:09.346 [2024-04-27 00:27:42.819661] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:09.346 [2024-04-27 00:27:42.819733] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:09.346 passed 00:08:09.346 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-04-27 00:27:42.819993] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:09.346 passed 00:08:09.346 00:08:09.346 [2024-04-27 00:27:42.820053] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:09.346 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.346 suites 1 1 n/a 0 0 00:08:09.346 tests 17 17 17 0 0 00:08:09.346 asserts 222 222 222 0 n/a 00:08:09.346 00:08:09.346 Elapsed time = 0.158 seconds 00:08:09.346 00:27:42 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:09.346 00:08:09.346 00:08:09.346 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.346 http://cunit.sourceforge.net/ 00:08:09.346 00:08:09.346 00:08:09.346 Suite: nvmf 00:08:09.604 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:09.604 00:08:09.604 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.604 suites 1 1 n/a 0 0 00:08:09.604 tests 1 1 1 0 0 00:08:09.604 asserts 17 17 17 0 n/a 00:08:09.604 00:08:09.604 Elapsed time = 0.024 seconds 00:08:09.604 00:08:09.604 real 0m0.507s 00:08:09.604 user 0m0.286s 00:08:09.604 sys 0m0.216s 00:08:09.604 00:27:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.604 00:27:42 -- common/autotest_common.sh@10 -- # set +x 00:08:09.604 ************************************ 00:08:09.604 END TEST unittest_nvmf 00:08:09.604 ************************************ 00:08:09.604 00:27:43 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:09.604 00:27:43 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:09.604 00:27:43 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:09.604 00:27:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:09.604 00:27:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.604 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:09.604 ************************************ 00:08:09.604 START TEST unittest_nvmf_rdma 00:08:09.604 ************************************ 00:08:09.605 00:27:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:09.605 00:08:09.605 00:08:09.605 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.605 http://cunit.sourceforge.net/ 00:08:09.605 00:08:09.605 00:08:09.605 Suite: nvmf 00:08:09.605 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-04-27 00:27:43.096226] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1847:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:09.605 [2024-04-27 00:27:43.096622] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1897:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:09.605 [2024-04-27 00:27:43.096700] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1897:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:09.605 passed 00:08:09.605 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:09.605 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:09.605 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:09.605 Test: test_nvmf_rdma_opts_init ...passed 00:08:09.605 Test: test_nvmf_rdma_request_free_data ...passed 00:08:09.605 Test: test_nvmf_rdma_resources_create ...passed 00:08:09.605 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:09.605 Test: test_nvmf_rdma_resize_cq ...[2024-04-27 00:27:43.099524] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 935:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:09.605 Using CQ of insufficient size may lead to CQ overrun 00:08:09.605 passed 00:08:09.605 00:08:09.605 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.605 suites 1 1 n/a 0 0 00:08:09.605 tests 9 9 9 0 0 00:08:09.605 asserts 579 579 579 0 n/a 00:08:09.605 00:08:09.605 Elapsed time = 0.004 seconds[2024-04-27 00:27:43.099645] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 940:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:09.605 [2024-04-27 00:27:43.099716] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 948:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:09.605 00:08:09.605 00:08:09.605 real 0m0.042s 00:08:09.605 user 0m0.032s 00:08:09.605 sys 0m0.010s 00:08:09.605 00:27:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.605 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:09.605 ************************************ 00:08:09.605 END TEST unittest_nvmf_rdma 00:08:09.605 ************************************ 00:08:09.605 00:27:43 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:09.605 00:27:43 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:08:09.605 00:27:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:09.605 00:27:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.605 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:09.863 ************************************ 00:08:09.863 START TEST unittest_scsi 00:08:09.863 ************************************ 00:08:09.863 00:27:43 -- common/autotest_common.sh@1111 -- # unittest_scsi 00:08:09.863 00:27:43 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:09.863 00:08:09.863 00:08:09.863 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.863 http://cunit.sourceforge.net/ 00:08:09.863 00:08:09.863 00:08:09.863 Suite: dev_suite 00:08:09.863 Test: dev_destruct_null_dev ...passed 00:08:09.863 Test: dev_destruct_zero_luns ...passed 00:08:09.863 Test: dev_destruct_null_lun ...passed 00:08:09.863 Test: dev_destruct_success ...passed 00:08:09.863 Test: dev_construct_num_luns_zero ...[2024-04-27 00:27:43.225110] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:09.863 passed 00:08:09.863 Test: dev_construct_no_lun_zero ...[2024-04-27 00:27:43.225529] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:09.863 passed 00:08:09.864 Test: dev_construct_null_lun ...passed 
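
The dev_suite cases in this stretch reject bad constructor arguments before any SCSI state is built: a device must be given at least LUN 0, and (as the next case shows) its name is capped at 255 characters. A hypothetical guard in the same spirit as those checks:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define SCSI_DEV_MAX_NAME 255    /* "maximum allowed length 255" */

    static bool
    scsi_dev_args_ok(const char *name, const int *lun_ids, size_t num_luns)
    {
        size_t i;

        if (name == NULL || strlen(name) > SCSI_DEV_MAX_NAME) {
            return false;
        }
        if (num_luns == 0) {
            return false;    /* "no LUNs specified" */
        }
        for (i = 0; i < num_luns; i++) {
            if (lun_ids[i] == 0) {
                return true;
            }
        }
        return false;        /* "no LUN 0 specified" */
    }
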
00:08:09.864 Test: dev_construct_name_too_long ...[2024-04-27 00:27:43.225598] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:09.864 [2024-04-27 00:27:43.225661] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:09.864 passed 00:08:09.864 Test: dev_construct_success ...passed 00:08:09.864 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:09.864 Test: dev_queue_mgmt_task_success ...passed 00:08:09.864 Test: dev_queue_task_success ...passed 00:08:09.864 Test: dev_stop_success ...passed 00:08:09.864 Test: dev_add_port_max_ports ...[2024-04-27 00:27:43.226100] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:09.864 passed 00:08:09.864 Test: dev_add_port_construct_failure1 ...[2024-04-27 00:27:43.226241] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:09.864 passed 00:08:09.864 Test: dev_add_port_construct_failure2 ...passed 00:08:09.864 Test: dev_add_port_success1 ...passed 00:08:09.864 Test: dev_add_port_success2 ...passed 00:08:09.864 Test: dev_add_port_success3 ...passed 00:08:09.864 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:09.864 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:09.864 Test: dev_find_port_by_id_success ...passed 00:08:09.864 Test: dev_add_lun_bdev_not_found ...passed 00:08:09.864 Test: dev_add_lun_no_free_lun_id ...[2024-04-27 00:27:43.226516] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:09.864 passed 00:08:09.864 Test: dev_add_lun_success1 ...[2024-04-27 00:27:43.227037] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:09.864 passed 00:08:09.864 Test: dev_add_lun_success2 ...passed 00:08:09.864 Test: dev_check_pending_tasks ...passed 00:08:09.864 Test: dev_iterate_luns ...passed 00:08:09.864 Test: dev_find_free_lun ...passed 00:08:09.864 00:08:09.864 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.864 suites 1 1 n/a 0 0 00:08:09.864 tests 29 29 29 0 0 00:08:09.864 asserts 97 97 97 0 n/a 00:08:09.864 00:08:09.864 Elapsed time = 0.003 seconds 00:08:09.864 00:27:43 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:09.864 00:08:09.864 00:08:09.864 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.864 http://cunit.sourceforge.net/ 00:08:09.864 00:08:09.864 00:08:09.864 Suite: lun_suite 00:08:09.864 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-04-27 00:27:43.263262] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:09.864 passed 00:08:09.864 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-04-27 00:27:43.263636] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:09.864 passed 00:08:09.864 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:09.864 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:09.864 Test: 
lun_task_mgmt_execute_invalid_case ...[2024-04-27 00:27:43.263806] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:09.864 passed 00:08:09.864 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:09.864 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:09.864 Test: lun_append_task_null_lun_not_supported ...passed 00:08:09.864 Test: lun_execute_scsi_task_pending ...passed 00:08:09.864 Test: lun_execute_scsi_task_complete ...passed 00:08:09.864 Test: lun_execute_scsi_task_resize ...passed 00:08:09.864 Test: lun_destruct_success ...passed 00:08:09.864 Test: lun_construct_null_ctx ...passed 00:08:09.864 Test: lun_construct_success ...[2024-04-27 00:27:43.264025] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:09.864 passed 00:08:09.864 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:08:09.864 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:09.864 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:09.864 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:09.864 00:08:09.864 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.864 suites 1 1 n/a 0 0 00:08:09.864 tests 18 18 18 0 0 00:08:09.864 asserts 153 153 153 0 n/a 00:08:09.864 00:08:09.864 Elapsed time = 0.001 seconds 00:08:09.864 00:27:43 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:09.864 00:08:09.864 00:08:09.864 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.864 http://cunit.sourceforge.net/ 00:08:09.864 00:08:09.864 00:08:09.864 Suite: scsi_suite 00:08:09.864 Test: scsi_init ...passed 00:08:09.864 00:08:09.864 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.864 suites 1 1 n/a 0 0 00:08:09.864 tests 1 1 1 0 0 00:08:09.864 asserts 1 1 1 0 n/a 00:08:09.864 00:08:09.864 Elapsed time = 0.000 seconds 00:08:09.864 00:27:43 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:09.864 00:08:09.864 00:08:09.864 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.864 http://cunit.sourceforge.net/ 00:08:09.864 00:08:09.864 00:08:09.864 Suite: translation_suite 00:08:09.864 Test: mode_select_6_test ...passed 00:08:09.864 Test: mode_select_6_test2 ...passed 00:08:09.864 Test: mode_sense_6_test ...passed 00:08:09.864 Test: mode_sense_10_test ...passed 00:08:09.864 Test: inquiry_evpd_test ...passed 00:08:09.864 Test: inquiry_standard_test ...passed 00:08:09.864 Test: inquiry_overflow_test ...passed 00:08:09.864 Test: task_complete_test ...passed 00:08:09.864 Test: lba_range_test ...passed 00:08:09.864 Test: xfer_len_test ...[2024-04-27 00:27:43.324661] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:09.864 passed 00:08:09.864 Test: xfer_test ...passed 00:08:09.864 Test: scsi_name_padding_test ...passed 00:08:09.864 Test: get_dif_ctx_test ...passed 00:08:09.864 Test: unmap_split_test ...passed 00:08:09.864 00:08:09.864 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.864 suites 1 1 n/a 0 0 00:08:09.864 tests 14 14 14 0 0 00:08:09.864 asserts 1205 1205 1205 0 n/a 00:08:09.864 00:08:09.864 Elapsed time = 0.005 seconds 00:08:09.864 00:27:43 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:09.864 00:08:09.864 00:08:09.864 CUnit - A unit 
testing framework for C - Version 2.1-3 00:08:09.864 http://cunit.sourceforge.net/ 00:08:09.864 00:08:09.864 00:08:09.864 Suite: reservation_suite 00:08:09.864 Test: test_reservation_register ...[2024-04-27 00:27:43.348806] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:09.864 passed 00:08:09.864 Test: test_reservation_reserve ...[2024-04-27 00:27:43.349227] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:09.864 [2024-04-27 00:27:43.349308] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:09.864 passed 00:08:09.864 Test: test_reservation_preempt_non_all_regs ...[2024-04-27 00:27:43.349415] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:09.864 [2024-04-27 00:27:43.349488] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:09.864 [2024-04-27 00:27:43.349570] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:09.864 passed 00:08:09.864 Test: test_reservation_preempt_all_regs ...[2024-04-27 00:27:43.349732] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:09.864 passed 00:08:09.864 Test: test_reservation_cmds_conflict ...[2024-04-27 00:27:43.349882] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:09.864 [2024-04-27 00:27:43.349963] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:09.864 [2024-04-27 00:27:43.350019] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:09.864 [2024-04-27 00:27:43.350074] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:09.864 [2024-04-27 00:27:43.350126] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:09.864 [2024-04-27 00:27:43.350170] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:09.864 passed 00:08:09.864 Test: test_scsi2_reserve_release ...passed 00:08:09.864 Test: test_pr_with_scsi2_reserve_release ...[2024-04-27 00:27:43.350280] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:09.864 passed 00:08:09.864 00:08:09.864 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.864 suites 1 1 n/a 0 0 00:08:09.864 tests 7 7 7 0 0 00:08:09.864 asserts 257 257 257 0 n/a 00:08:09.864 00:08:09.864 Elapsed time = 0.002 seconds 00:08:09.864 00:08:09.864 real 0m0.153s 00:08:09.864 user 0m0.078s 00:08:09.864 sys 0m0.075s 00:08:09.864 00:27:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:09.864 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:09.864 ************************************ 00:08:09.864 END TEST 
unittest_scsi 00:08:09.865 ************************************ 00:08:09.865 00:27:43 -- unit/unittest.sh@276 -- # uname -s 00:08:09.865 00:27:43 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:08:09.865 00:27:43 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:08:09.865 00:27:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:09.865 00:27:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.865 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.124 ************************************ 00:08:10.124 START TEST unittest_sock 00:08:10.124 ************************************ 00:08:10.124 00:27:43 -- common/autotest_common.sh@1111 -- # unittest_sock 00:08:10.124 00:27:43 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:10.124 00:08:10.124 00:08:10.124 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.124 http://cunit.sourceforge.net/ 00:08:10.124 00:08:10.124 00:08:10.124 Suite: sock 00:08:10.124 Test: posix_sock ...passed 00:08:10.124 Test: ut_sock ...passed 00:08:10.124 Test: posix_sock_group ...passed 00:08:10.124 Test: ut_sock_group ...passed 00:08:10.124 Test: posix_sock_group_fairness ...passed 00:08:10.124 Test: _posix_sock_close ...passed 00:08:10.124 Test: sock_get_default_opts ...passed 00:08:10.124 Test: ut_sock_impl_get_set_opts ...passed 00:08:10.124 Test: posix_sock_impl_get_set_opts ...passed 00:08:10.124 Test: ut_sock_map ...passed 00:08:10.124 Test: override_impl_opts ...passed 00:08:10.124 Test: ut_sock_group_get_ctx ...passed 00:08:10.124 00:08:10.124 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.124 suites 1 1 n/a 0 0 00:08:10.124 tests 12 12 12 0 0 00:08:10.124 asserts 349 349 349 0 n/a 00:08:10.124 00:08:10.124 Elapsed time = 0.009 seconds 00:08:10.124 00:27:43 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:10.124 00:08:10.124 00:08:10.124 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.124 http://cunit.sourceforge.net/ 00:08:10.124 00:08:10.124 00:08:10.124 Suite: posix 00:08:10.124 Test: flush ...passed 00:08:10.124 00:08:10.124 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.124 suites 1 1 n/a 0 0 00:08:10.124 tests 1 1 1 0 0 00:08:10.124 asserts 28 28 28 0 n/a 00:08:10.124 00:08:10.124 Elapsed time = 0.000 seconds 00:08:10.124 00:27:43 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:10.124 00:08:10.124 real 0m0.091s 00:08:10.124 user 0m0.039s 00:08:10.124 sys 0m0.029s 00:08:10.124 00:27:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:10.124 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.124 ************************************ 00:08:10.124 END TEST unittest_sock 00:08:10.124 ************************************ 00:08:10.124 00:27:43 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:10.124 00:27:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.124 00:27:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.124 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.124 ************************************ 00:08:10.124 START TEST unittest_thread 00:08:10.124 ************************************ 00:08:10.124 00:27:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:10.124 
00:08:10.124 00:08:10.124 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.124 http://cunit.sourceforge.net/ 00:08:10.124 00:08:10.124 00:08:10.124 Suite: io_channel 00:08:10.124 Test: thread_alloc ...passed 00:08:10.124 Test: thread_send_msg ...passed 00:08:10.124 Test: thread_poller ...passed 00:08:10.124 Test: poller_pause ...passed 00:08:10.124 Test: thread_for_each ...passed 00:08:10.124 Test: for_each_channel_remove ...passed 00:08:10.124 Test: for_each_channel_unreg ...[2024-04-27 00:27:43.655356] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffc639fde90 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:10.124 passed 00:08:10.124 Test: thread_name ...passed 00:08:10.124 Test: channel ...[2024-04-27 00:27:43.659419] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x55d5744349e0 00:08:10.124 passed 00:08:10.124 Test: channel_destroy_races ...passed 00:08:10.124 Test: thread_exit_test ...[2024-04-27 00:27:43.664585] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:08:10.124 passed 00:08:10.124 Test: thread_update_stats_test ...passed 00:08:10.124 Test: nested_channel ...passed 00:08:10.124 Test: device_unregister_and_thread_exit_race ...passed 00:08:10.124 Test: cache_closest_timed_poller ...passed 00:08:10.124 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:10.124 Test: io_device_lookup ...passed 00:08:10.124 Test: spdk_spin ...[2024-04-27 00:27:43.675774] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:10.124 [2024-04-27 00:27:43.675840] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc639fde80 00:08:10.124 [2024-04-27 00:27:43.675951] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:10.124 [2024-04-27 00:27:43.677583] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:10.124 [2024-04-27 00:27:43.677661] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc639fde80 00:08:10.124 [2024-04-27 00:27:43.677709] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:10.124 [2024-04-27 00:27:43.677752] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc639fde80 00:08:10.124 [2024-04-27 00:27:43.677791] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:10.124 [2024-04-27 00:27:43.677836] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffc639fde80 00:08:10.124 [2024-04-27 00:27:43.677873] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:10.124 [2024-04-27 00:27:43.677923] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: 
spinlock 0x7ffc639fde80 00:08:10.124 passed 00:08:10.124 Test: for_each_channel_and_thread_exit_race ...passed 00:08:10.124 Test: for_each_thread_and_thread_exit_race ...passed 00:08:10.124 00:08:10.124 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.124 suites 1 1 n/a 0 0 00:08:10.124 tests 20 20 20 0 0 00:08:10.124 asserts 409 409 409 0 n/a 00:08:10.124 00:08:10.124 Elapsed time = 0.050 seconds 00:08:10.124 00:08:10.124 real 0m0.084s 00:08:10.124 user 0m0.072s 00:08:10.124 sys 0m0.012s 00:08:10.124 00:27:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:10.124 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.124 ************************************ 00:08:10.124 END TEST unittest_thread 00:08:10.124 ************************************ 00:08:10.384 00:27:43 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:10.384 00:27:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.384 00:27:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.384 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.384 ************************************ 00:08:10.384 START TEST unittest_iobuf 00:08:10.384 ************************************ 00:08:10.384 00:27:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:10.384 00:08:10.384 00:08:10.384 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.384 http://cunit.sourceforge.net/ 00:08:10.384 00:08:10.384 00:08:10.384 Suite: io_channel 00:08:10.384 Test: iobuf ...passed 00:08:10.384 Test: iobuf_cache ...[2024-04-27 00:27:43.807279] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:10.384 [2024-04-27 00:27:43.807584] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:10.384 [2024-04-27 00:27:43.807754] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 323:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:10.384 [2024-04-27 00:27:43.807820] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 326:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:10.384 [2024-04-27 00:27:43.807921] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:10.384 [2024-04-27 00:27:43.807974] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
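The iobuf_cache errors above come from deliberately undersized pools: "at 4/5 entries" reads as a per-channel cache that requested 5 buffers but found the shared pool exhausted after 4, and "at 0/4 entries" as a second module finding nothing left. The sizing rule the message points at (the pool must cover the sum of all per-channel caches) is plain arithmetic; a sketch with hypothetical names, not the SPDK iobuf API:

#include <assert.h>
#include <stdint.h>

/* Hypothetical helper, for illustration only: a shared buffer pool must hold
 * at least the sum of every channel's requested cache, otherwise channel init
 * reports "Failed to populate ... at N/M entries" as in the log above. */
static uint64_t min_pool_count(const uint64_t *cache_sizes, int num_channels)
{
	uint64_t total = 0;
	for (int i = 0; i < num_channels; i++) {
		total += cache_sizes[i];
	}
	return total;
}

int main(void)
{
	/* The test above appears to configure caches of 5 (ut_module0) and
	 * 4 (ut_module1) entries against a pool of only 4, so population
	 * fails at 4/5 and then 0/4. */
	uint64_t caches[] = { 5, 4 };
	assert(min_pool_count(caches, 2) == 9); /* a pool of 4 is undersized */
	return 0;
}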
00:08:10.384 passed 00:08:10.384 00:08:10.384 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.384 suites 1 1 n/a 0 0 00:08:10.384 tests 2 2 2 0 0 00:08:10.384 asserts 107 107 107 0 n/a 00:08:10.384 00:08:10.384 Elapsed time = 0.008 seconds 00:08:10.384 00:08:10.384 real 0m0.042s 00:08:10.384 user 0m0.021s 00:08:10.384 sys 0m0.021s 00:08:10.384 00:27:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:10.384 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.384 ************************************ 00:08:10.384 END TEST unittest_iobuf 00:08:10.384 ************************************ 00:08:10.384 00:27:43 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:08:10.384 00:27:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.384 00:27:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.384 00:27:43 -- common/autotest_common.sh@10 -- # set +x 00:08:10.384 ************************************ 00:08:10.384 START TEST unittest_util 00:08:10.384 ************************************ 00:08:10.384 00:27:43 -- common/autotest_common.sh@1111 -- # unittest_util 00:08:10.384 00:27:43 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:10.384 00:08:10.384 00:08:10.384 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.384 http://cunit.sourceforge.net/ 00:08:10.384 00:08:10.384 00:08:10.384 Suite: base64 00:08:10.384 Test: test_base64_get_encoded_strlen ...passed 00:08:10.384 Test: test_base64_get_decoded_len ...passed 00:08:10.384 Test: test_base64_encode ...passed 00:08:10.384 Test: test_base64_decode ...passed 00:08:10.384 Test: test_base64_urlsafe_encode ...passed 00:08:10.384 Test: test_base64_urlsafe_decode ...passed 00:08:10.384 00:08:10.384 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.384 suites 1 1 n/a 0 0 00:08:10.384 tests 6 6 6 0 0 00:08:10.384 asserts 112 112 112 0 n/a 00:08:10.384 00:08:10.384 Elapsed time = 0.000 seconds 00:08:10.384 00:27:43 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:10.384 00:08:10.384 00:08:10.384 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.384 http://cunit.sourceforge.net/ 00:08:10.384 00:08:10.384 00:08:10.384 Suite: bit_array 00:08:10.384 Test: test_1bit ...passed 00:08:10.384 Test: test_64bit ...passed 00:08:10.384 Test: test_find ...passed 00:08:10.384 Test: test_resize ...passed 00:08:10.384 Test: test_errors ...passed 00:08:10.384 Test: test_count ...passed 00:08:10.384 Test: test_mask_store_load ...passed 00:08:10.384 Test: test_mask_clear ...passed 00:08:10.384 00:08:10.384 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.384 suites 1 1 n/a 0 0 00:08:10.384 tests 8 8 8 0 0 00:08:10.384 asserts 5075 5075 5075 0 n/a 00:08:10.384 00:08:10.384 Elapsed time = 0.002 seconds 00:08:10.643 00:27:43 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:10.643 00:08:10.643 00:08:10.643 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.643 http://cunit.sourceforge.net/ 00:08:10.643 00:08:10.643 00:08:10.643 Suite: cpuset 00:08:10.643 Test: test_cpuset ...passed 00:08:10.643 Test: test_cpuset_parse ...[2024-04-27 00:27:43.987178] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:10.643 [2024-04-27 00:27:43.987502] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:08:10.643 [2024-04-27 00:27:43.987620] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:10.643 [2024-04-27 00:27:43.987717] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:10.643 [2024-04-27 00:27:43.987775] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:10.643 [2024-04-27 00:27:43.987826] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:10.643 [2024-04-27 00:27:43.987872] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:10.643 [2024-04-27 00:27:43.987934] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:10.643 passed 00:08:10.643 Test: test_cpuset_fmt ...passed 00:08:10.643 00:08:10.644 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.644 suites 1 1 n/a 0 0 00:08:10.644 tests 3 3 3 0 0 00:08:10.644 asserts 65 65 65 0 n/a 00:08:10.644 00:08:10.644 Elapsed time = 0.002 seconds 00:08:10.644 00:27:44 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:10.644 00:08:10.644 00:08:10.644 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.644 http://cunit.sourceforge.net/ 00:08:10.644 00:08:10.644 00:08:10.644 Suite: crc16 00:08:10.644 Test: test_crc16_t10dif ...passed 00:08:10.644 Test: test_crc16_t10dif_seed ...passed 00:08:10.644 Test: test_crc16_t10dif_copy ...passed 00:08:10.644 00:08:10.644 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.644 suites 1 1 n/a 0 0 00:08:10.644 tests 3 3 3 0 0 00:08:10.644 asserts 5 5 5 0 n/a 00:08:10.644 00:08:10.644 Elapsed time = 0.000 seconds 00:08:10.644 00:27:44 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:10.644 00:08:10.644 00:08:10.644 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.644 http://cunit.sourceforge.net/ 00:08:10.644 00:08:10.644 00:08:10.644 Suite: crc32_ieee 00:08:10.644 Test: test_crc32_ieee ...passed 00:08:10.644 00:08:10.644 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.644 suites 1 1 n/a 0 0 00:08:10.644 tests 1 1 1 0 0 00:08:10.644 asserts 1 1 1 0 n/a 00:08:10.644 00:08:10.644 Elapsed time = 0.000 seconds 00:08:10.644 00:27:44 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:10.644 00:08:10.644 00:08:10.644 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.644 http://cunit.sourceforge.net/ 00:08:10.644 00:08:10.644 00:08:10.644 Suite: crc32c 00:08:10.644 Test: test_crc32c ...passed 00:08:10.644 Test: test_crc32c_nvme ...passed 00:08:10.644 00:08:10.644 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.644 suites 1 1 n/a 0 0 00:08:10.644 tests 2 2 2 0 0 00:08:10.644 asserts 16 16 16 0 n/a 00:08:10.644 00:08:10.644 Elapsed time = 0.000 seconds 00:08:10.644 00:27:44 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:10.644 00:08:10.644 00:08:10.644 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.644 http://cunit.sourceforge.net/ 00:08:10.644 00:08:10.644 00:08:10.644 Suite: crc64 00:08:10.644 Test: test_crc64_nvme 
...passed 00:08:10.644 00:08:10.644 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.644 suites 1 1 n/a 0 0 00:08:10.644 tests 1 1 1 0 0 00:08:10.644 asserts 4 4 4 0 n/a 00:08:10.644 00:08:10.644 Elapsed time = 0.001 seconds 00:08:10.644 00:27:44 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:10.644 00:08:10.644 00:08:10.644 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.644 http://cunit.sourceforge.net/ 00:08:10.644 00:08:10.644 00:08:10.644 Suite: string 00:08:10.644 Test: test_parse_ip_addr ...passed 00:08:10.644 Test: test_str_chomp ...passed 00:08:10.644 Test: test_parse_capacity ...passed 00:08:10.644 Test: test_sprintf_append_realloc ...passed 00:08:10.644 Test: test_strtol ...passed 00:08:10.644 Test: test_strtoll ...passed 00:08:10.644 Test: test_strarray ...passed 00:08:10.644 Test: test_strcpy_replace ...passed 00:08:10.644 00:08:10.644 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.644 suites 1 1 n/a 0 0 00:08:10.644 tests 8 8 8 0 0 00:08:10.644 asserts 161 161 161 0 n/a 00:08:10.644 00:08:10.644 Elapsed time = 0.001 seconds 00:08:10.644 00:27:44 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:10.644 00:08:10.644 00:08:10.644 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.644 http://cunit.sourceforge.net/ 00:08:10.644 00:08:10.644 00:08:10.644 Suite: dif 00:08:10.644 Test: dif_generate_and_verify_test ...[2024-04-27 00:27:44.169433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:10.644 [2024-04-27 00:27:44.169973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:10.644 [2024-04-27 00:27:44.170266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:10.644 [2024-04-27 00:27:44.170573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:10.644 [2024-04-27 00:27:44.170922] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:10.644 [2024-04-27 00:27:44.171236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:10.644 passed 00:08:10.644 Test: dif_disable_check_test ...[2024-04-27 00:27:44.172224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:10.644 [2024-04-27 00:27:44.172608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:10.644 [2024-04-27 00:27:44.172883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:10.644 passed 00:08:10.644 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-04-27 00:27:44.173916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:10.644 [2024-04-27 00:27:44.174227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:10.644 
[2024-04-27 00:27:44.174625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:10.644 [2024-04-27 00:27:44.174998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:10.644 [2024-04-27 00:27:44.175362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:10.644 [2024-04-27 00:27:44.175702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:10.644 [2024-04-27 00:27:44.175993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:10.644 [2024-04-27 00:27:44.176314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:10.644 [2024-04-27 00:27:44.176640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:10.644 [2024-04-27 00:27:44.176926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:10.644 [2024-04-27 00:27:44.177251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:10.644 passed 00:08:10.644 Test: dif_apptag_mask_test ...[2024-04-27 00:27:44.177608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:10.644 [2024-04-27 00:27:44.177921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:10.644 passed 00:08:10.644 Test: dif_sec_512_md_0_error_test ...[2024-04-27 00:27:44.178135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:10.644 passed 00:08:10.644 Test: dif_sec_4096_md_0_error_test ...passed[2024-04-27 00:27:44.178189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:10.644 [2024-04-27 00:27:44.178238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
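The dif_ut output around this point exercises end-to-end protection information. In the classic format these messages reference, each block carries an 8-byte tuple: a 2-byte CRC guard, a 2-byte application tag, and a 4-byte reference tag usually derived from the LBA. A minimal sketch of the three field comparisons behind the "Failed to compare Guard / App Tag / Ref Tag" messages, using hypothetical names rather than SPDK's internal _dif_verify (the sketch covers only the 16-bit-guard format):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative 8-byte DIF tuple (16-bit-guard format), not SPDK's layout. */
struct dif_tuple {
	uint16_t guard;   /* CRC16 of the block's data */
	uint16_t app_tag; /* application-defined */
	uint32_t ref_tag; /* typically derived from the LBA */
};

/* Mirrors the three comparisons reported in the log above. */
static int dif_verify(const struct dif_tuple *exp, const struct dif_tuple *act,
		      uint64_t lba)
{
	if (exp->guard != act->guard) {
		fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
			", Expected=%x, Actual=%x\n", lba, exp->guard, act->guard);
		return -1;
	}
	if (exp->app_tag != act->app_tag) {
		fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64 "\n", lba);
		return -1;
	}
	if (exp->ref_tag != act->ref_tag) {
		fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64 "\n", lba);
		return -1;
	}
	return 0;
}

int main(void)
{
	/* Values echo one mismatch from the log: Expected=fd6c vs Actual=fd4c. */
	struct dif_tuple exp = { .guard = 0xfd6c, .app_tag = 0x88, .ref_tag = 0x58 };
	struct dif_tuple act = { .guard = 0xfd4c, .app_tag = 0x88, .ref_tag = 0x58 };
	return dif_verify(&exp, &act, 95) == 0 ? 0 : 1; /* exits 1: guard mismatch */
}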
00:08:10.645 00:08:10.645 Test: dif_sec_4100_md_128_error_test ...[2024-04-27 00:27:44.178302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:10.645 [2024-04-27 00:27:44.178390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:10.645 passed 00:08:10.645 Test: dif_guard_seed_test ...passed 00:08:10.645 Test: dif_guard_value_test ...passed 00:08:10.645 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:10.645 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:10.645 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:10.645 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:10.645 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:10.645 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:10.645 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:10.645 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:10.645 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:10.645 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-27 00:27:44.219603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd6c, Actual=fd4c 00:08:10.645 [2024-04-27 00:27:44.221819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fe01, Actual=fe21 00:08:10.645 [2024-04-27 00:27:44.223983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.645 [2024-04-27 00:27:44.226568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.645 [2024-04-27 00:27:44.228799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.906 [2024-04-27 00:27:44.231296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.906 [2024-04-27 00:27:44.233835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=6d93 00:08:10.906 [2024-04-27 00:27:44.235992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fe21, Actual=71fc 00:08:10.906 [2024-04-27 00:27:44.238153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1a9753ed, Actual=1ab753ed 00:08:10.906 [2024-04-27 00:27:44.240609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=38774660, Actual=38574660 00:08:10.906 [2024-04-27 00:27:44.242989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.906 [2024-04-27 00:27:44.245346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.906 [2024-04-27 00:27:44.247811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.906 [2024-04-27 00:27:44.250317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.906 [2024-04-27 00:27:44.252643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=60906322 00:08:10.906 [2024-04-27 00:27:44.254860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=38574660, Actual=46d47a66 00:08:10.906 [2024-04-27 00:27:44.256867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.906 [2024-04-27 00:27:44.259319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d4817a266, Actual=88010a2d4837a266 00:08:10.906 [2024-04-27 00:27:44.261858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.906 [2024-04-27 00:27:44.264416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.906 [2024-04-27 00:27:44.267194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=7f 00:08:10.906 [2024-04-27 00:27:44.269734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=7f 00:08:10.906 [2024-04-27 00:27:44.272471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.906 [2024-04-27 00:27:44.274813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=88010a2d4837a266, Actual=544d2680fad9837e 00:08:10.906 passed 00:08:10.906 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-04-27 00:27:44.276087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:10.906 [2024-04-27 00:27:44.276442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:10.906 [2024-04-27 00:27:44.276768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.906 [2024-04-27 00:27:44.277120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.906 [2024-04-27 00:27:44.277455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.906 [2024-04-27 00:27:44.277797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.906 [2024-04-27 00:27:44.278119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6d93 00:08:10.906 [2024-04-27 00:27:44.278383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=71fc 00:08:10.906 [2024-04-27 00:27:44.278638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:08:10.906 [2024-04-27 00:27:44.278952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:08:10.907 [2024-04-27 00:27:44.279291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.279639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.279966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.280307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.280629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=60906322 00:08:10.907 [2024-04-27 00:27:44.280885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=46d47a66 00:08:10.907 [2024-04-27 00:27:44.281164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.907 [2024-04-27 00:27:44.281477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4817a266, Actual=88010a2d4837a266 00:08:10.907 [2024-04-27 00:27:44.281844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.282159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.282494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.907 [2024-04-27 00:27:44.282809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.907 [2024-04-27 00:27:44.283152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.907 [2024-04-27 00:27:44.283408] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=544d2680fad9837e 00:08:10.907 passed 00:08:10.907 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-04-27 00:27:44.283718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:10.907 [2024-04-27 00:27:44.284045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:10.907 [2024-04-27 00:27:44.284351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.284676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.285044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.285362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.285707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6d93 00:08:10.907 [2024-04-27 00:27:44.286336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=71fc 00:08:10.907 [2024-04-27 00:27:44.286825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:08:10.907 [2024-04-27 00:27:44.287384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:08:10.907 [2024-04-27 00:27:44.287949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.288505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.289082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.289630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.290225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=60906322 00:08:10.907 [2024-04-27 00:27:44.290691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=46d47a66 00:08:10.907 [2024-04-27 00:27:44.291220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.907 [2024-04-27 00:27:44.291748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4817a266, Actual=88010a2d4837a266 00:08:10.907 [2024-04-27 00:27:44.292343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.292915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.293526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.907 [2024-04-27 00:27:44.294134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.907 [2024-04-27 00:27:44.294749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.907 [2024-04-27 00:27:44.295236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=544d2680fad9837e 00:08:10.907 passed 00:08:10.907 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-04-27 00:27:44.295775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:10.907 [2024-04-27 00:27:44.296358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:10.907 [2024-04-27 00:27:44.296933] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.297508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.298178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.298781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.299352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6d93 00:08:10.907 [2024-04-27 00:27:44.299798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=71fc 00:08:10.907 [2024-04-27 00:27:44.300256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:08:10.907 [2024-04-27 00:27:44.300803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:08:10.907 [2024-04-27 00:27:44.301415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.302020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.302335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.302660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=200058 00:08:10.907 [2024-04-27 00:27:44.302967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=60906322 00:08:10.907 [2024-04-27 00:27:44.303220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=46d47a66 00:08:10.907 [2024-04-27 00:27:44.303483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.907 [2024-04-27 00:27:44.303798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4817a266, Actual=88010a2d4837a266 00:08:10.907 [2024-04-27 00:27:44.304114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.304415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.304733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.907 [2024-04-27 00:27:44.305050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.907 [2024-04-27 00:27:44.305381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.907 [2024-04-27 00:27:44.305635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=544d2680fad9837e 00:08:10.907 passed 00:08:10.907 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-04-27 00:27:44.305950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:10.907 [2024-04-27 00:27:44.306256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:10.907 [2024-04-27 00:27:44.306579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.306889] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.307220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.307537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.907 [2024-04-27 00:27:44.307848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6d93 00:08:10.907 [2024-04-27 00:27:44.308096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=71fc 00:08:10.907 passed 00:08:10.907 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-04-27 00:27:44.308372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:08:10.907 [2024-04-27 00:27:44.308699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:08:10.907 [2024-04-27 00:27:44.309027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.309329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.907 [2024-04-27 00:27:44.309646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.908 [2024-04-27 00:27:44.309974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.908 [2024-04-27 00:27:44.310289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=60906322 00:08:10.908 [2024-04-27 00:27:44.310554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=46d47a66 00:08:10.908 [2024-04-27 00:27:44.310852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.908 [2024-04-27 00:27:44.311168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4817a266, Actual=88010a2d4837a266 00:08:10.908 [2024-04-27 00:27:44.311473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.311785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.312094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.908 [2024-04-27 00:27:44.312410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.908 [2024-04-27 00:27:44.312747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.908 [2024-04-27 00:27:44.312992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=544d2680fad9837e 00:08:10.908 passed 00:08:10.908 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-04-27 00:27:44.313287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:10.908 [2024-04-27 00:27:44.313602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:10.908 [2024-04-27 00:27:44.313942] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.314251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare 
App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.314603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.908 [2024-04-27 00:27:44.314925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.908 [2024-04-27 00:27:44.315237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=6d93 00:08:10.908 [2024-04-27 00:27:44.315480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=71fc 00:08:10.908 passed 00:08:10.908 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-04-27 00:27:44.315780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1a9753ed, Actual=1ab753ed 00:08:10.908 [2024-04-27 00:27:44.316091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38774660, Actual=38574660 00:08:10.908 [2024-04-27 00:27:44.316412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.316725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.317035] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.908 [2024-04-27 00:27:44.317343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:10.908 [2024-04-27 00:27:44.317654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=60906322 00:08:10.908 [2024-04-27 00:27:44.317917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=46d47a66 00:08:10.908 [2024-04-27 00:27:44.318228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.908 [2024-04-27 00:27:44.318549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4817a266, Actual=88010a2d4837a266 00:08:10.908 [2024-04-27 00:27:44.318876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.319183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.319498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.908 [2024-04-27 00:27:44.319800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:10.908 [2024-04-27 00:27:44.320123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, 
Actual=31aa774219bcb7e2 00:08:10.908 [2024-04-27 00:27:44.320371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=544d2680fad9837e 00:08:10.908 passed 00:08:10.908 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:10.908 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:10.908 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:10.908 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:10.908 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:10.908 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:10.908 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:10.908 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:10.908 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:10.908 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-27 00:27:44.364664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd6c, Actual=fd4c 00:08:10.908 [2024-04-27 00:27:44.365810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d9c, Actual=5dbc 00:08:10.908 [2024-04-27 00:27:44.366943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.368085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.369219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.908 [2024-04-27 00:27:44.370386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.908 [2024-04-27 00:27:44.371514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=6d93 00:08:10.908 [2024-04-27 00:27:44.372668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=8a87 00:08:10.908 [2024-04-27 00:27:44.373816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1a9753ed, Actual=1ab753ed 00:08:10.908 [2024-04-27 00:27:44.374955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=927e7ad2, Actual=925e7ad2 00:08:10.908 [2024-04-27 00:27:44.376129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.377291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.378481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.908 [2024-04-27 00:27:44.379628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.908 [2024-04-27 00:27:44.380764] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=60906322 00:08:10.908 [2024-04-27 00:27:44.381912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=b80b441a, Actual=c688781c 00:08:10.908 [2024-04-27 00:27:44.383048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.908 [2024-04-27 00:27:44.384227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=851e2abd9e9ee61a, Actual=851e2abd9ebee61a 00:08:10.908 [2024-04-27 00:27:44.385350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.386515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.387656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=7f 00:08:10.908 [2024-04-27 00:27:44.388798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=7f 00:08:10.908 [2024-04-27 00:27:44.389940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.908 passed 00:08:10.908 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-27 00:27:44.391167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=811b57b5e2799bb2 00:08:10.908 [2024-04-27 00:27:44.391556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd6c, Actual=fd4c 00:08:10.908 [2024-04-27 00:27:44.391831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ca06, Actual=ca26 00:08:10.908 [2024-04-27 00:27:44.392110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.392389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.392697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200059 00:08:10.908 [2024-04-27 00:27:44.393003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200059 00:08:10.908 [2024-04-27 00:27:44.393275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=6d93 00:08:10.908 [2024-04-27 00:27:44.393559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=1d1d 00:08:10.908 [2024-04-27 00:27:44.393842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1a9753ed, Actual=1ab753ed 00:08:10.908 [2024-04-27 00:27:44.394128] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=b1c86ea5, Actual=b1e86ea5 00:08:10.908 [2024-04-27 00:27:44.394432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.394725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.908 [2024-04-27 00:27:44.395006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200059 00:08:10.909 [2024-04-27 00:27:44.395301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200059 00:08:10.909 [2024-04-27 00:27:44.395574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=60906322 00:08:10.909 [2024-04-27 00:27:44.395858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=e53e6c6b 00:08:10.909 [2024-04-27 00:27:44.396213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.909 [2024-04-27 00:27:44.396536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9061b1ce874d69df, Actual=9061b1ce876d69df 00:08:10.909 [2024-04-27 00:27:44.396878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.397182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.397467] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:08:10.909 [2024-04-27 00:27:44.397759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:08:10.909 [2024-04-27 00:27:44.398061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.909 [2024-04-27 00:27:44.398366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=9464ccc6fbaa1477 00:08:10.909 passed 00:08:10.909 Test: dix_sec_512_md_0_error ...[2024-04-27 00:27:44.398449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
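The dix_sec_512_md_0_error case just above is a negative test: it hands the DIF context a metadata region smaller than the 8-byte T10 PI tuple and expects spdk_dif_ctx_init to reject it, which is why the "Metadata size is smaller than DIF size." error is part of a passing run. Below is a minimal sketch of that kind of guard, assuming the standard tuple layout (2-byte guard CRC, 2-byte application tag, 4-byte reference tag); this is illustrative C, not SPDK's actual implementation.

```c
/* Illustrative sketch, not SPDK code: mirrors the validation behind the
 * "Metadata size is smaller than DIF size." message above. A T10 DIF
 * tuple is 8 bytes, so per-block metadata must be at least that large. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct t10_dif_tuple {
	uint16_t guard;   /* CRC computed over the data block */
	uint16_t app_tag; /* opaque, application-defined */
	uint32_t ref_tag; /* typically seeded from the starting LBA */
};

static int
dif_ctx_init_md_check(size_t md_size)
{
	if (md_size < sizeof(struct t10_dif_tuple)) {
		fprintf(stderr, "Metadata size is smaller than DIF size.\n");
		return -1;
	}
	return 0;
}

int
main(void)
{
	/* md_size = 0 reproduces the negative case the test exercises */
	return dif_ctx_init_md_check(0) == -1 ? 0 : 1;
}
```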
00:08:10.909 passed 00:08:10.909 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:08:10.909 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:10.909 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:10.909 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:10.909 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:10.909 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:10.909 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:10.909 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:10.909 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:10.909 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-27 00:27:44.442016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd6c, Actual=fd4c 00:08:10.909 [2024-04-27 00:27:44.443189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d9c, Actual=5dbc 00:08:10.909 [2024-04-27 00:27:44.444316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.445440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.446592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.909 [2024-04-27 00:27:44.447735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.909 [2024-04-27 00:27:44.448871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=fd4c, Actual=6d93 00:08:10.909 [2024-04-27 00:27:44.450005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=55a, Actual=8a87 00:08:10.909 [2024-04-27 00:27:44.451158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1a9753ed, Actual=1ab753ed 00:08:10.909 [2024-04-27 00:27:44.452290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=927e7ad2, Actual=925e7ad2 00:08:10.909 [2024-04-27 00:27:44.453416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.454565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.455697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.909 [2024-04-27 00:27:44.456828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=20005f 00:08:10.909 [2024-04-27 00:27:44.457962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=1ab753ed, Actual=60906322 00:08:10.909 [2024-04-27 00:27:44.459117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, 
Expected=b80b441a, Actual=c688781c 00:08:10.909 [2024-04-27 00:27:44.460261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.909 [2024-04-27 00:27:44.461374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=851e2abd9e9ee61a, Actual=851e2abd9ebee61a 00:08:10.909 [2024-04-27 00:27:44.462540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.463672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=95, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.464823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=7f 00:08:10.909 [2024-04-27 00:27:44.465987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=95, Expected=5f, Actual=7f 00:08:10.909 [2024-04-27 00:27:44.467192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.909 passed 00:08:10.909 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-27 00:27:44.468343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=95, Expected=5d577b185097baaa, Actual=811b57b5e2799bb2 00:08:10.909 [2024-04-27 00:27:44.468740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd6c, Actual=fd4c 00:08:10.909 [2024-04-27 00:27:44.469030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ca06, Actual=ca26 00:08:10.909 [2024-04-27 00:27:44.469338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.469630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.469950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200059 00:08:10.909 [2024-04-27 00:27:44.470233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200059 00:08:10.909 [2024-04-27 00:27:44.470527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=6d93 00:08:10.909 [2024-04-27 00:27:44.470803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=1d1d 00:08:10.909 [2024-04-27 00:27:44.471089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1a9753ed, Actual=1ab753ed 00:08:10.909 [2024-04-27 00:27:44.471366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=b1c86ea5, Actual=b1e86ea5 00:08:10.909 [2024-04-27 00:27:44.471659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.909 
[2024-04-27 00:27:44.471941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.472228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200059 00:08:10.909 [2024-04-27 00:27:44.472504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200059 00:08:10.909 [2024-04-27 00:27:44.472770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=60906322 00:08:10.909 [2024-04-27 00:27:44.473065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=e53e6c6b 00:08:10.909 [2024-04-27 00:27:44.473351] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728eec20d3, Actual=a576a7728ecc20d3 00:08:10.909 [2024-04-27 00:27:44.473642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9061b1ce874d69df, Actual=9061b1ce876d69df 00:08:10.909 [2024-04-27 00:27:44.473920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.474216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=a8 00:08:10.909 [2024-04-27 00:27:44.474509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:08:10.909 [2024-04-27 00:27:44.474802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=79 00:08:10.909 [2024-04-27 00:27:44.475079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=31aa774219bcb7e2 00:08:10.909 [2024-04-27 00:27:44.475358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=9464ccc6fbaa1477 00:08:10.909 passed 00:08:10.909 Test: set_md_interleave_iovs_test ...passed 00:08:10.909 Test: set_md_interleave_iovs_split_test ...passed 00:08:10.909 Test: dif_generate_stream_pi_16_test ...passed 00:08:10.909 Test: dif_generate_stream_test ...passed 00:08:10.909 Test: set_md_interleave_iovs_alignment_test ...[2024-04-27 00:27:44.482912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
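Every *ERROR* line in this stretch is expected output: the inject_1_2_4_8 cases deliberately flip bits in the guard, application tag, and reference tag, then assert that verification flags each field (the guard values appear at 16, 32, and 64 bits because the suite cycles through different guard formats). The following is a hedged sketch of the per-block comparison that produces messages of this shape; the struct and function names are invented for illustration and do not match SPDK's internals.

```c
/* Sketch shaped after the _dif_verify/_dif_reftag_check messages above.
 * The guard is held in a uint64_t so one struct covers the 16/32/64-bit
 * guard widths seen in the log; all names here are hypothetical. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct pi_fields {
	uint64_t guard;
	uint16_t app_tag;
	uint32_t ref_tag;
};

static int
verify_block(uint64_t lba, const struct pi_fields *exp, const struct pi_fields *act)
{
	if (exp->guard != act->guard) {
		fprintf(stderr, "Failed to compare Guard: LBA=%" PRIx64
			", Expected=%" PRIx64 ", Actual=%" PRIx64 "\n",
			lba, exp->guard, act->guard);
		return -1;
	}
	if (exp->app_tag != act->app_tag) {
		fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIx64
			", Expected=%x, Actual=%x\n", lba, exp->app_tag, act->app_tag);
		return -1;
	}
	if (exp->ref_tag != act->ref_tag) {
		fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIx64
			", Expected=%" PRIx32 ", Actual=%" PRIx32 "\n",
			lba, exp->ref_tag, act->ref_tag);
		return -1;
	}
	return 0;
}

int
main(void)
{
	/* Reproduce one injected mismatch from the log: App Tag 88 vs a8 */
	struct pi_fields expected = { .guard = 0xfd4c, .app_tag = 0x88, .ref_tag = 0x58 };
	struct pi_fields actual   = { .guard = 0xfd4c, .app_tag = 0xa8, .ref_tag = 0x58 };

	return verify_block(0x88, &expected, &actual) == -1 ? 0 : 1;
}
```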
00:08:10.909 passed 00:08:10.909 Test: dif_generate_split_test ...passed 00:08:10.909 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:10.909 Test: dif_verify_split_test ...passed 00:08:11.168 Test: dif_verify_stream_multi_segments_test ...passed 00:08:11.168 Test: update_crc32c_pi_16_test ...passed 00:08:11.168 Test: update_crc32c_test ...passed 00:08:11.168 Test: dif_update_crc32c_split_test ...passed 00:08:11.168 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:11.168 Test: get_range_with_md_test ...passed 00:08:11.168 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:11.169 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:11.169 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:11.169 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:11.169 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:11.169 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:11.169 Test: dif_generate_and_verify_unmap_test ...passed 00:08:11.169 00:08:11.169 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.169 suites 1 1 n/a 0 0 00:08:11.169 tests 79 79 79 0 0 00:08:11.169 asserts 3584 3584 3584 0 n/a 00:08:11.169 00:08:11.169 Elapsed time = 0.356 seconds 00:08:11.169 00:27:44 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:11.169 00:08:11.169 00:08:11.169 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.169 http://cunit.sourceforge.net/ 00:08:11.169 00:08:11.169 00:08:11.169 Suite: iov 00:08:11.169 Test: test_single_iov ...passed 00:08:11.169 Test: test_simple_iov ...passed 00:08:11.169 Test: test_complex_iov ...passed 00:08:11.169 Test: test_iovs_to_buf ...passed 00:08:11.169 Test: test_buf_to_iovs ...passed 00:08:11.169 Test: test_memset ...passed 00:08:11.169 Test: test_iov_one ...passed 00:08:11.169 Test: test_iov_xfer ...passed 00:08:11.169 00:08:11.169 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.169 suites 1 1 n/a 0 0 00:08:11.169 tests 8 8 8 0 0 00:08:11.169 asserts 156 156 156 0 n/a 00:08:11.169 00:08:11.169 Elapsed time = 0.000 seconds 00:08:11.169 00:27:44 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:11.169 00:08:11.169 00:08:11.169 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.169 http://cunit.sourceforge.net/ 00:08:11.169 00:08:11.169 00:08:11.169 Suite: math 00:08:11.169 Test: test_serial_number_arithmetic ...passed 00:08:11.169 Suite: erase 00:08:11.169 Test: test_memset_s ...passed 00:08:11.169 00:08:11.169 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.169 suites 2 2 n/a 0 0 00:08:11.169 tests 2 2 2 0 0 00:08:11.169 asserts 18 18 18 0 n/a 00:08:11.169 00:08:11.169 Elapsed time = 0.000 seconds 00:08:11.169 00:27:44 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:11.169 00:08:11.169 00:08:11.169 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.169 http://cunit.sourceforge.net/ 00:08:11.169 00:08:11.169 00:08:11.169 Suite: pipe 00:08:11.169 Test: test_create_destroy ...passed 00:08:11.169 Test: test_write_get_buffer ...passed 00:08:11.169 Test: test_write_advance ...passed 00:08:11.169 Test: test_read_get_buffer ...passed 00:08:11.169 Test: test_read_advance ...passed 00:08:11.169 Test: test_data ...passed 00:08:11.169 00:08:11.169 Run Summary: Type Total Ran 
Passed Failed Inactive 00:08:11.169 suites 1 1 n/a 0 0 00:08:11.169 tests 6 6 6 0 0 00:08:11.169 asserts 251 251 251 0 n/a 00:08:11.169 00:08:11.169 Elapsed time = 0.000 seconds 00:08:11.169 00:27:44 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:11.169 00:08:11.169 00:08:11.169 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.169 http://cunit.sourceforge.net/ 00:08:11.169 00:08:11.169 00:08:11.169 Suite: xor 00:08:11.169 Test: test_xor_gen ...passed 00:08:11.169 00:08:11.169 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.169 suites 1 1 n/a 0 0 00:08:11.169 tests 1 1 1 0 0 00:08:11.169 asserts 17 17 17 0 n/a 00:08:11.169 00:08:11.169 Elapsed time = 0.007 seconds 00:08:11.169 00:08:11.169 real 0m0.749s 00:08:11.169 user 0m0.571s 00:08:11.169 sys 0m0.182s 00:08:11.169 00:27:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.169 00:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.169 ************************************ 00:08:11.169 END TEST unittest_util 00:08:11.169 ************************************ 00:08:11.169 00:27:44 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:11.169 00:27:44 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:11.169 00:27:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.169 00:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.169 00:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.169 ************************************ 00:08:11.169 START TEST unittest_vhost 00:08:11.169 ************************************ 00:08:11.169 00:27:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:11.427 00:08:11.427 00:08:11.427 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.427 http://cunit.sourceforge.net/ 00:08:11.427 00:08:11.427 00:08:11.427 Suite: vhost_suite 00:08:11.427 Test: desc_to_iov_test ...[2024-04-27 00:27:44.768007] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:11.427 passed 00:08:11.427 Test: create_controller_test ...[2024-04-27 00:27:44.771416] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:11.427 [2024-04-27 00:27:44.771539] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:11.427 [2024-04-27 00:27:44.771636] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:11.427 [2024-04-27 00:27:44.771754] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:11.427 [2024-04-27 00:27:44.771809] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:11.427 [2024-04-27 00:27:44.771901] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1782:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxpassed 00:08:11.427 Test: session_find_by_vid_test ...[2024-04-27 00:27:44.772679] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:11.427 passed 00:08:11.427 Test: remove_controller_test ...[2024-04-27 00:27:44.774374] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1867:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:11.427 passed 00:08:11.427 Test: vq_avail_ring_get_test ...passed 00:08:11.427 Test: vq_packed_ring_test ...passed 00:08:11.427 Test: vhost_blk_construct_test ...passed 00:08:11.427 00:08:11.427 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.427 suites 1 1 n/a 0 0 00:08:11.427 tests 7 7 7 0 0 00:08:11.427 asserts 147 147 147 0 n/a 00:08:11.427 00:08:11.427 Elapsed time = 0.010 seconds 00:08:11.427 00:08:11.427 real 0m0.045s 00:08:11.427 user 0m0.021s 00:08:11.427 sys 0m0.025s 00:08:11.427 00:27:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.427 00:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.427 ************************************ 00:08:11.427 END TEST unittest_vhost 00:08:11.427 ************************************ 00:08:11.427 00:27:44 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:11.427 00:27:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.427 00:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.427 00:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.427 ************************************ 00:08:11.427 START TEST unittest_dma 00:08:11.428 ************************************ 00:08:11.428 00:27:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:11.428 00:08:11.428 00:08:11.428 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.428 http://cunit.sourceforge.net/ 00:08:11.428 00:08:11.428 00:08:11.428 Suite: dma_suite 00:08:11.428 Test: test_dma ...[2024-04-27 00:27:44.888344] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:11.428 passed 00:08:11.428 00:08:11.428 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.428 suites 1 1 n/a 0 0 00:08:11.428 tests 1 1 1 0 0 00:08:11.428 asserts 54 54 54 0 n/a 00:08:11.428 00:08:11.428 Elapsed time = 0.000 seconds 00:08:11.428 00:08:11.428 real 0m0.029s 00:08:11.428 user 0m0.017s 00:08:11.428 sys 0m0.013s 00:08:11.428 00:27:44 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.428 00:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.428 ************************************ 00:08:11.428 END TEST unittest_dma 00:08:11.428 ************************************ 00:08:11.428 00:27:44 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:08:11.428 00:27:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.428 00:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.428 00:27:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.428 ************************************ 00:08:11.428 START TEST unittest_init 00:08:11.428 ************************************ 00:08:11.428 00:27:44 -- common/autotest_common.sh@1111 -- # unittest_init 00:08:11.428 00:27:44 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:11.428 00:08:11.428 00:08:11.428 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.428 http://cunit.sourceforge.net/ 00:08:11.428 00:08:11.428 00:08:11.428 Suite: subsystem_suite 00:08:11.428 Test: subsystem_sort_test_depends_on_single ...passed 00:08:11.428 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:11.428 Test: subsystem_sort_test_missing_dependency ...[2024-04-27 00:27:45.009536] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:11.428 passed 00:08:11.428 00:08:11.428 [2024-04-27 00:27:45.009871] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:11.428 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.428 suites 1 1 n/a 0 0 00:08:11.428 tests 3 3 3 0 0 00:08:11.428 asserts 20 20 20 0 n/a 00:08:11.428 00:08:11.428 Elapsed time = 0.001 seconds 00:08:11.687 00:08:11.687 real 0m0.037s 00:08:11.687 user 0m0.022s 00:08:11.687 sys 0m0.016s 00:08:11.687 00:27:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.687 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:11.687 ************************************ 00:08:11.687 END TEST unittest_init 00:08:11.687 ************************************ 00:08:11.687 00:27:45 -- unit/unittest.sh@288 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:11.687 00:27:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.687 00:27:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.687 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:11.687 ************************************ 00:08:11.687 START TEST unittest_keyring 00:08:11.687 ************************************ 00:08:11.687 00:27:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:11.687 00:08:11.687 00:08:11.687 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.687 http://cunit.sourceforge.net/ 00:08:11.687 00:08:11.687 00:08:11.687 Suite: keyring 00:08:11.687 Test: test_keyring_add_remove ...[2024-04-27 00:27:45.126727] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:11.687 [2024-04-27 00:27:45.127024] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:11.687 passed 00:08:11.687 Test: test_keyring_get_put ...passed 00:08:11.687 00:08:11.687 [2024-04-27 00:27:45.127110] 
/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:11.687 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.687 suites 1 1 n/a 0 0 00:08:11.687 tests 2 2 2 0 0 00:08:11.687 asserts 44 44 44 0 n/a 00:08:11.687 00:08:11.687 Elapsed time = 0.001 seconds 00:08:11.687 00:08:11.687 real 0m0.032s 00:08:11.687 user 0m0.017s 00:08:11.687 sys 0m0.016s 00:08:11.687 00:27:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.687 00:27:45 -- common/autotest_common.sh@10 -- # set +x 00:08:11.687 ************************************ 00:08:11.687 END TEST unittest_keyring 00:08:11.687 ************************************ 00:08:11.687 00:27:45 -- unit/unittest.sh@290 -- # '[' yes = yes ']' 00:08:11.687 00:27:45 -- unit/unittest.sh@290 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:11.687 00:27:45 -- unit/unittest.sh@291 -- # hostname 00:08:11.687 00:27:45 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:11.944 geninfo: WARNING: invalid characters removed from testname! 00:08:44.026 00:28:12 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:44.026 00:28:17 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:47.339 00:28:20 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:50.624 00:28:23 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:53.904 00:28:26 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:56.437 00:28:29 -- unit/unittest.sh@297 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:58.966 00:28:32 -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:01.498 00:28:34 -- unit/unittest.sh@299 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:01.498 00:28:34 -- unit/unittest.sh@300 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:02.067 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:02.067 Found 316 entries. 00:09:02.067 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:02.067 Writing .css and .png files. 00:09:02.067 Generating output. 00:09:02.067 Processing file include/linux/virtio_ring.h 00:09:02.326 Processing file include/spdk/base64.h 00:09:02.326 Processing file include/spdk/nvme_spec.h 00:09:02.326 Processing file include/spdk/nvmf_transport.h 00:09:02.326 Processing file include/spdk/trace.h 00:09:02.326 Processing file include/spdk/nvme.h 00:09:02.326 Processing file include/spdk/histogram_data.h 00:09:02.326 Processing file include/spdk/util.h 00:09:02.326 Processing file include/spdk/endian.h 00:09:02.326 Processing file include/spdk/thread.h 00:09:02.326 Processing file include/spdk/mmio.h 00:09:02.326 Processing file include/spdk/bdev_module.h 00:09:02.585 Processing file include/spdk_internal/virtio.h 00:09:02.585 Processing file include/spdk_internal/rdma.h 00:09:02.585 Processing file include/spdk_internal/sock.h 00:09:02.585 Processing file include/spdk_internal/nvme_tcp.h 00:09:02.585 Processing file include/spdk_internal/utf.h 00:09:02.585 Processing file include/spdk_internal/sgl.h 00:09:02.585 Processing file lib/accel/accel.c 00:09:02.585 Processing file lib/accel/accel_rpc.c 00:09:02.585 Processing file lib/accel/accel_sw.c 00:09:02.843 Processing file lib/bdev/bdev_zone.c 00:09:02.843 Processing file lib/bdev/bdev.c 00:09:02.843 Processing file lib/bdev/scsi_nvme.c 00:09:02.843 Processing file lib/bdev/part.c 00:09:02.843 Processing file lib/bdev/bdev_rpc.c 00:09:03.411 Processing file lib/blob/blobstore.c 00:09:03.411 Processing file lib/blob/blobstore.h 00:09:03.411 Processing file lib/blob/request.c 00:09:03.411 Processing file lib/blob/blob_bs_dev.c 00:09:03.411 Processing file lib/blob/zeroes.c 00:09:03.411 Processing file lib/blobfs/blobfs.c 00:09:03.411 Processing file lib/blobfs/tree.c 00:09:03.411 Processing file lib/conf/conf.c 00:09:03.411 Processing file lib/dma/dma.c 00:09:03.979 Processing file lib/env_dpdk/init.c 00:09:03.979 Processing file lib/env_dpdk/pci_vmd.c 00:09:03.979 Processing file lib/env_dpdk/threads.c 00:09:03.979 Processing file lib/env_dpdk/env.c 00:09:03.979 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:03.979 
Processing file lib/env_dpdk/pci_virtio.c 00:09:03.979 Processing file lib/env_dpdk/pci.c 00:09:03.979 Processing file lib/env_dpdk/pci_event.c 00:09:03.979 Processing file lib/env_dpdk/sigbus_handler.c 00:09:03.979 Processing file lib/env_dpdk/pci_dpdk.c 00:09:03.979 Processing file lib/env_dpdk/pci_ioat.c 00:09:03.979 Processing file lib/env_dpdk/pci_idxd.c 00:09:03.979 Processing file lib/env_dpdk/memory.c 00:09:03.979 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:03.979 Processing file lib/event/scheduler_static.c 00:09:03.979 Processing file lib/event/log_rpc.c 00:09:03.979 Processing file lib/event/app.c 00:09:03.979 Processing file lib/event/app_rpc.c 00:09:03.979 Processing file lib/event/reactor.c 00:09:04.546 Processing file lib/ftl/ftl_init.c 00:09:04.546 Processing file lib/ftl/ftl_debug.c 00:09:04.546 Processing file lib/ftl/ftl_reloc.c 00:09:04.546 Processing file lib/ftl/ftl_io.h 00:09:04.546 Processing file lib/ftl/ftl_rq.c 00:09:04.546 Processing file lib/ftl/ftl_core.h 00:09:04.546 Processing file lib/ftl/ftl_trace.c 00:09:04.546 Processing file lib/ftl/ftl_band.h 00:09:04.546 Processing file lib/ftl/ftl_band_ops.c 00:09:04.546 Processing file lib/ftl/ftl_l2p_flat.c 00:09:04.546 Processing file lib/ftl/ftl_p2l.c 00:09:04.546 Processing file lib/ftl/ftl_band.c 00:09:04.546 Processing file lib/ftl/ftl_l2p_cache.c 00:09:04.546 Processing file lib/ftl/ftl_debug.h 00:09:04.546 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:04.546 Processing file lib/ftl/ftl_io.c 00:09:04.546 Processing file lib/ftl/ftl_writer.c 00:09:04.546 Processing file lib/ftl/ftl_nv_cache.c 00:09:04.546 Processing file lib/ftl/ftl_nv_cache.h 00:09:04.546 Processing file lib/ftl/ftl_layout.c 00:09:04.546 Processing file lib/ftl/ftl_l2p.c 00:09:04.546 Processing file lib/ftl/ftl_writer.h 00:09:04.546 Processing file lib/ftl/ftl_core.c 00:09:04.546 Processing file lib/ftl/ftl_sb.c 00:09:04.546 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:04.546 Processing file lib/ftl/base/ftl_base_dev.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:04.805 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:05.063 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:05.063 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:05.063 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:05.063 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:05.063 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:05.063 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:05.322 Processing file lib/ftl/utils/ftl_property.c 00:09:05.322 Processing file lib/ftl/utils/ftl_property.h 00:09:05.322 Processing file lib/ftl/utils/ftl_conf.c 00:09:05.322 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:05.322 Processing file lib/ftl/utils/ftl_md.c 00:09:05.322 Processing file lib/ftl/utils/ftl_mempool.c 00:09:05.322 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:05.322 
Processing file lib/ftl/utils/ftl_df.h 00:09:05.322 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:05.322 Processing file lib/idxd/idxd_user.c 00:09:05.322 Processing file lib/idxd/idxd.c 00:09:05.322 Processing file lib/idxd/idxd_internal.h 00:09:05.580 Processing file lib/init/subsystem.c 00:09:05.580 Processing file lib/init/subsystem_rpc.c 00:09:05.580 Processing file lib/init/rpc.c 00:09:05.580 Processing file lib/init/json_config.c 00:09:05.580 Processing file lib/ioat/ioat.c 00:09:05.580 Processing file lib/ioat/ioat_internal.h 00:09:05.839 Processing file lib/iscsi/portal_grp.c 00:09:05.839 Processing file lib/iscsi/tgt_node.c 00:09:05.839 Processing file lib/iscsi/task.c 00:09:05.839 Processing file lib/iscsi/task.h 00:09:05.839 Processing file lib/iscsi/iscsi_subsystem.c 00:09:05.839 Processing file lib/iscsi/iscsi.h 00:09:05.839 Processing file lib/iscsi/param.c 00:09:05.839 Processing file lib/iscsi/iscsi.c 00:09:05.839 Processing file lib/iscsi/md5.c 00:09:05.839 Processing file lib/iscsi/init_grp.c 00:09:05.839 Processing file lib/iscsi/conn.c 00:09:05.839 Processing file lib/iscsi/iscsi_rpc.c 00:09:06.105 Processing file lib/json/json_parse.c 00:09:06.105 Processing file lib/json/json_write.c 00:09:06.105 Processing file lib/json/json_util.c 00:09:06.105 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:06.105 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:06.105 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:06.105 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:06.378 Processing file lib/keyring/keyring.c 00:09:06.378 Processing file lib/keyring/keyring_rpc.c 00:09:06.378 Processing file lib/log/log.c 00:09:06.378 Processing file lib/log/log_flags.c 00:09:06.378 Processing file lib/log/log_deprecated.c 00:09:06.378 Processing file lib/lvol/lvol.c 00:09:06.637 Processing file lib/nbd/nbd_rpc.c 00:09:06.637 Processing file lib/nbd/nbd.c 00:09:06.637 Processing file lib/notify/notify.c 00:09:06.637 Processing file lib/notify/notify_rpc.c 00:09:07.573 Processing file lib/nvme/nvme_qpair.c 00:09:07.573 Processing file lib/nvme/nvme_poll_group.c 00:09:07.573 Processing file lib/nvme/nvme_internal.h 00:09:07.573 Processing file lib/nvme/nvme_discovery.c 00:09:07.573 Processing file lib/nvme/nvme_quirks.c 00:09:07.573 Processing file lib/nvme/nvme.c 00:09:07.573 Processing file lib/nvme/nvme_io_msg.c 00:09:07.573 Processing file lib/nvme/nvme_ctrlr.c 00:09:07.573 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:07.573 Processing file lib/nvme/nvme_ns.c 00:09:07.573 Processing file lib/nvme/nvme_cuse.c 00:09:07.573 Processing file lib/nvme/nvme_auth.c 00:09:07.573 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:07.573 Processing file lib/nvme/nvme_rdma.c 00:09:07.573 Processing file lib/nvme/nvme_tcp.c 00:09:07.573 Processing file lib/nvme/nvme_ns_cmd.c 00:09:07.573 Processing file lib/nvme/nvme_zns.c 00:09:07.573 Processing file lib/nvme/nvme_opal.c 00:09:07.573 Processing file lib/nvme/nvme_pcie_common.c 00:09:07.573 Processing file lib/nvme/nvme_pcie.c 00:09:07.573 Processing file lib/nvme/nvme_fabric.c 00:09:07.573 Processing file lib/nvme/nvme_transport.c 00:09:07.573 Processing file lib/nvme/nvme_pcie_internal.h 00:09:07.573 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:07.832 Processing file lib/nvmf/ctrlr_bdev.c 00:09:07.832 Processing file lib/nvmf/nvmf_rpc.c 00:09:07.832 Processing file lib/nvmf/nvmf.c 00:09:07.832 Processing file lib/nvmf/subsystem.c 00:09:07.832 Processing file lib/nvmf/ctrlr.c 00:09:07.832 Processing file 
lib/nvmf/tcp.c 00:09:07.832 Processing file lib/nvmf/nvmf_internal.h 00:09:07.832 Processing file lib/nvmf/ctrlr_discovery.c 00:09:07.832 Processing file lib/nvmf/transport.c 00:09:07.832 Processing file lib/nvmf/rdma.c 00:09:07.832 Processing file lib/rdma/common.c 00:09:07.832 Processing file lib/rdma/rdma_verbs.c 00:09:08.090 Processing file lib/rpc/rpc.c 00:09:08.348 Processing file lib/scsi/port.c 00:09:08.349 Processing file lib/scsi/scsi_pr.c 00:09:08.349 Processing file lib/scsi/scsi_bdev.c 00:09:08.349 Processing file lib/scsi/scsi.c 00:09:08.349 Processing file lib/scsi/lun.c 00:09:08.349 Processing file lib/scsi/task.c 00:09:08.349 Processing file lib/scsi/dev.c 00:09:08.349 Processing file lib/scsi/scsi_rpc.c 00:09:08.349 Processing file lib/sock/sock_rpc.c 00:09:08.349 Processing file lib/sock/sock.c 00:09:08.349 Processing file lib/thread/iobuf.c 00:09:08.349 Processing file lib/thread/thread.c 00:09:08.607 Processing file lib/trace/trace_rpc.c 00:09:08.607 Processing file lib/trace/trace.c 00:09:08.607 Processing file lib/trace/trace_flags.c 00:09:08.607 Processing file lib/trace_parser/trace.cpp 00:09:08.607 Processing file lib/ut/ut.c 00:09:08.865 Processing file lib/ut_mock/mock.c 00:09:09.125 Processing file lib/util/dif.c 00:09:09.125 Processing file lib/util/strerror_tls.c 00:09:09.125 Processing file lib/util/fd.c 00:09:09.125 Processing file lib/util/math.c 00:09:09.125 Processing file lib/util/bit_array.c 00:09:09.125 Processing file lib/util/zipf.c 00:09:09.125 Processing file lib/util/fd_group.c 00:09:09.125 Processing file lib/util/pipe.c 00:09:09.125 Processing file lib/util/cpuset.c 00:09:09.125 Processing file lib/util/base64.c 00:09:09.125 Processing file lib/util/string.c 00:09:09.125 Processing file lib/util/file.c 00:09:09.125 Processing file lib/util/crc32_ieee.c 00:09:09.125 Processing file lib/util/uuid.c 00:09:09.125 Processing file lib/util/xor.c 00:09:09.125 Processing file lib/util/crc16.c 00:09:09.125 Processing file lib/util/crc32.c 00:09:09.125 Processing file lib/util/crc64.c 00:09:09.125 Processing file lib/util/hexlify.c 00:09:09.125 Processing file lib/util/crc32c.c 00:09:09.125 Processing file lib/util/iov.c 00:09:09.125 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:09.125 Processing file lib/vfio_user/host/vfio_user.c 00:09:09.384 Processing file lib/vhost/vhost_blk.c 00:09:09.384 Processing file lib/vhost/vhost_rpc.c 00:09:09.384 Processing file lib/vhost/vhost_scsi.c 00:09:09.384 Processing file lib/vhost/vhost.c 00:09:09.384 Processing file lib/vhost/rte_vhost_user.c 00:09:09.384 Processing file lib/vhost/vhost_internal.h 00:09:09.642 Processing file lib/virtio/virtio_vhost_user.c 00:09:09.642 Processing file lib/virtio/virtio_pci.c 00:09:09.642 Processing file lib/virtio/virtio.c 00:09:09.642 Processing file lib/virtio/virtio_vfio_user.c 00:09:09.642 Processing file lib/vmd/vmd.c 00:09:09.642 Processing file lib/vmd/led.c 00:09:09.642 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:09.642 Processing file module/accel/dsa/accel_dsa.c 00:09:09.900 Processing file module/accel/error/accel_error.c 00:09:09.900 Processing file module/accel/error/accel_error_rpc.c 00:09:09.900 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:09.900 Processing file module/accel/iaa/accel_iaa.c 00:09:09.900 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:09.900 Processing file module/accel/ioat/accel_ioat.c 00:09:10.158 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:10.158 Processing file module/bdev/aio/bdev_aio.c 
00:09:10.158 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:10.158 Processing file module/bdev/delay/vbdev_delay.c 00:09:10.158 Processing file module/bdev/error/vbdev_error.c 00:09:10.158 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:10.416 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:10.416 Processing file module/bdev/ftl/bdev_ftl.c 00:09:10.416 Processing file module/bdev/gpt/gpt.c 00:09:10.416 Processing file module/bdev/gpt/gpt.h 00:09:10.416 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:10.416 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:10.416 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:10.675 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:10.675 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:10.675 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:10.675 Processing file module/bdev/malloc/bdev_malloc.c 00:09:10.675 Processing file module/bdev/null/bdev_null.c 00:09:10.675 Processing file module/bdev/null/bdev_null_rpc.c 00:09:11.243 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:11.243 Processing file module/bdev/nvme/vbdev_opal.c 00:09:11.243 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:11.243 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:11.243 Processing file module/bdev/nvme/nvme_rpc.c 00:09:11.243 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:11.243 Processing file module/bdev/nvme/bdev_nvme.c 00:09:11.243 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:11.243 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:11.500 Processing file module/bdev/raid/bdev_raid.c 00:09:11.500 Processing file module/bdev/raid/raid5f.c 00:09:11.500 Processing file module/bdev/raid/raid0.c 00:09:11.500 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:11.500 Processing file module/bdev/raid/bdev_raid.h 00:09:11.500 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:11.500 Processing file module/bdev/raid/concat.c 00:09:11.500 Processing file module/bdev/raid/raid1.c 00:09:11.500 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:11.500 Processing file module/bdev/split/vbdev_split.c 00:09:11.500 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:11.500 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:11.500 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:11.757 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:11.757 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:11.757 Processing file module/blob/bdev/blob_bdev.c 00:09:11.757 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:11.757 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:12.014 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:12.014 Processing file module/event/subsystems/accel/accel.c 00:09:12.014 Processing file module/event/subsystems/bdev/bdev.c 00:09:12.014 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:12.014 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:12.271 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:12.271 Processing file module/event/subsystems/keyring/keyring.c 00:09:12.271 Processing file module/event/subsystems/nbd/nbd.c 00:09:12.529 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:12.529 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:12.529 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:12.529 Processing file module/event/subsystems/scsi/scsi.c 00:09:12.529 Processing file 
module/event/subsystems/sock/sock.c 00:09:12.786 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:12.786 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:12.786 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:12.786 Processing file module/event/subsystems/vmd/vmd.c 00:09:12.786 Processing file module/keyring/file/keyring.c 00:09:12.786 Processing file module/keyring/file/keyring_rpc.c 00:09:13.044 Processing file module/keyring/linux/keyring.c 00:09:13.044 Processing file module/keyring/linux/keyring_rpc.c 00:09:13.044 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:13.044 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:13.044 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:13.303 Processing file module/sock/sock_kernel.h 00:09:13.303 Processing file module/sock/posix/posix.c 00:09:13.303 Writing directory view page. 00:09:13.303 Overall coverage rate: 00:09:13.303 lines......: 38.9% (39969 of 102658 lines) 00:09:13.303 functions..: 42.6% (3651 of 8567 functions) 00:09:13.303 00:09:13.303 00:09:13.303 ===================== 00:09:13.303 All unit tests passed 00:09:13.303 ===================== 00:09:13.303 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:13.303 00:28:46 -- unit/unittest.sh@303 -- # set +x 00:09:13.303 00:09:13.303 00:09:13.303 00:09:13.303 real 3m9.708s 00:09:13.303 user 2m42.730s 00:09:13.303 sys 0m16.482s 00:09:13.303 00:28:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:13.303 00:28:46 -- common/autotest_common.sh@10 -- # set +x 00:09:13.303 ************************************ 00:09:13.303 END TEST unittest 00:09:13.303 ************************************ 00:09:13.303 00:28:46 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:13.303 00:28:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:13.303 00:28:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:13.303 00:28:46 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:13.303 00:28:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:13.303 00:28:46 -- common/autotest_common.sh@10 -- # set +x 00:09:13.303 00:28:46 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:13.303 00:28:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:13.303 00:28:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.303 00:28:46 -- common/autotest_common.sh@10 -- # set +x 00:09:13.561 ************************************ 00:09:13.561 START TEST env 00:09:13.561 ************************************ 00:09:13.561 00:28:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:13.561 * Looking for test storage... 
00:09:13.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:13.561 00:28:46 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:13.561 00:28:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:13.561 00:28:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.561 00:28:46 -- common/autotest_common.sh@10 -- # set +x 00:09:13.561 ************************************ 00:09:13.561 START TEST env_memory 00:09:13.561 ************************************ 00:09:13.561 00:28:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:13.561 00:09:13.561 00:09:13.561 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.561 http://cunit.sourceforge.net/ 00:09:13.561 00:09:13.561 00:09:13.561 Suite: memory 00:09:13.561 Test: alloc and free memory map ...[2024-04-27 00:28:47.085794] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:13.561 passed 00:09:13.561 Test: mem map translation ...[2024-04-27 00:28:47.135077] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:13.561 [2024-04-27 00:28:47.135190] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:13.561 [2024-04-27 00:28:47.135299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:13.561 [2024-04-27 00:28:47.135400] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:13.820 passed 00:09:13.820 Test: mem map registration ...[2024-04-27 00:28:47.221985] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:13.820 [2024-04-27 00:28:47.222097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:13.820 passed 00:09:13.820 Test: mem map adjacent registrations ...passed 00:09:13.820 00:09:13.820 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.820 suites 1 1 n/a 0 0 00:09:13.820 tests 4 4 4 0 0 00:09:13.820 asserts 152 152 152 0 n/a 00:09:13.820 00:09:13.820 Elapsed time = 0.300 seconds 00:09:13.820 00:09:13.820 real 0m0.330s 00:09:13.820 user 0m0.322s 00:09:13.820 sys 0m0.008s 00:09:13.820 00:28:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:13.820 00:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:13.820 ************************************ 00:09:13.820 END TEST env_memory 00:09:13.820 ************************************ 00:09:13.820 00:28:47 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:13.820 00:28:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:13.820 00:28:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.820 00:28:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.079 ************************************ 00:09:14.079 START TEST env_vtophys 00:09:14.079 ************************************ 00:09:14.079 00:28:47 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:14.079 EAL: lib.eal log level changed from notice to debug 00:09:14.079 EAL: Detected lcore 0 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 1 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 2 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 3 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 4 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 5 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 6 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 7 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 8 as core 0 on socket 0 00:09:14.079 EAL: Detected lcore 9 as core 0 on socket 0 00:09:14.079 EAL: Maximum logical cores by configuration: 128 00:09:14.079 EAL: Detected CPU lcores: 10 00:09:14.079 EAL: Detected NUMA nodes: 1 00:09:14.079 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:09:14.079 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:14.079 EAL: Checking presence of .so 'librte_eal.so' 00:09:14.079 EAL: Detected static linkage of DPDK 00:09:14.079 EAL: No shared files mode enabled, IPC will be disabled 00:09:14.079 EAL: Selected IOVA mode 'PA' 00:09:14.079 EAL: Probing VFIO support... 00:09:14.079 EAL: IOMMU type 1 (Type 1) is supported 00:09:14.079 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:14.079 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:14.079 EAL: VFIO support initialized 00:09:14.079 EAL: Ask a virtual area of 0x2e000 bytes 00:09:14.079 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:14.079 EAL: Setting up physically contiguous memory... 00:09:14.079 EAL: Setting maximum number of open files to 1048576 00:09:14.079 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:14.079 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:14.079 EAL: Ask a virtual area of 0x61000 bytes 00:09:14.079 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:14.079 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:14.079 EAL: Ask a virtual area of 0x400000000 bytes 00:09:14.079 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:14.079 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:14.079 EAL: Ask a virtual area of 0x61000 bytes 00:09:14.079 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:14.079 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:14.079 EAL: Ask a virtual area of 0x400000000 bytes 00:09:14.079 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:14.079 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:14.079 EAL: Ask a virtual area of 0x61000 bytes 00:09:14.079 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:14.079 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:14.079 EAL: Ask a virtual area of 0x400000000 bytes 00:09:14.079 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:14.079 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:14.079 EAL: Ask a virtual area of 0x61000 bytes 00:09:14.079 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:14.079 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:14.079 EAL: Ask a virtual area of 0x400000000 bytes 00:09:14.079 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:14.079 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:14.079 EAL: Hugepages will be freed exactly as allocated. 
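[editor's note] The EAL records above show how the vtophys binary sets up its memory: virtual areas are reserved and memseg lists created for 2 MB hugepages before any allocation happens. A minimal sketch of reproducing this run outside the harness follows; the repo and binary paths match the log, while the HUGEMEM value is an assumption (setup.sh reserves that many megabytes of hugepages, which EAL then maps into the memseg lists described above).

```bash
# Hypothetical sketch: re-running the vtophys CUnit suite by hand.
cd /home/vagrant/spdk_repo/spdk
sudo HUGEMEM=512 ./scripts/setup.sh   # reserve 2 MB hugepages for EAL (size is an assumption)
./test/env/vtophys/vtophys            # runs vtophys_malloc_test / vtophys_spdk_malloc_test
```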
00:09:14.079 EAL: No shared files mode enabled, IPC is disabled 00:09:14.079 EAL: No shared files mode enabled, IPC is disabled 00:09:14.079 EAL: TSC frequency is ~2200000 KHz 00:09:14.079 EAL: Main lcore 0 is ready (tid=7f5f5583ea80;cpuset=[0]) 00:09:14.079 EAL: Trying to obtain current memory policy. 00:09:14.079 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.079 EAL: Restoring previous memory policy: 0 00:09:14.079 EAL: request: mp_malloc_sync 00:09:14.079 EAL: No shared files mode enabled, IPC is disabled 00:09:14.079 EAL: Heap on socket 0 was expanded by 2MB 00:09:14.079 EAL: No shared files mode enabled, IPC is disabled 00:09:14.079 EAL: Mem event callback 'spdk:(nil)' registered 00:09:14.079 00:09:14.079 00:09:14.079 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.079 http://cunit.sourceforge.net/ 00:09:14.079 00:09:14.079 00:09:14.079 Suite: components_suite 00:09:14.647 Test: vtophys_malloc_test ...passed 00:09:14.647 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:14.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.647 EAL: Restoring previous memory policy: 0 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was expanded by 4MB 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was shrunk by 4MB 00:09:14.647 EAL: Trying to obtain current memory policy. 00:09:14.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.647 EAL: Restoring previous memory policy: 0 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was expanded by 6MB 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was shrunk by 6MB 00:09:14.647 EAL: Trying to obtain current memory policy. 00:09:14.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.647 EAL: Restoring previous memory policy: 0 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was expanded by 10MB 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was shrunk by 10MB 00:09:14.647 EAL: Trying to obtain current memory policy. 00:09:14.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.647 EAL: Restoring previous memory policy: 0 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was expanded by 18MB 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was shrunk by 18MB 00:09:14.647 EAL: Trying to obtain current memory policy. 
00:09:14.647 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.647 EAL: Restoring previous memory policy: 0 00:09:14.647 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.647 EAL: request: mp_malloc_sync 00:09:14.647 EAL: No shared files mode enabled, IPC is disabled 00:09:14.647 EAL: Heap on socket 0 was expanded by 34MB 00:09:14.908 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.908 EAL: request: mp_malloc_sync 00:09:14.908 EAL: No shared files mode enabled, IPC is disabled 00:09:14.908 EAL: Heap on socket 0 was shrunk by 34MB 00:09:14.908 EAL: Trying to obtain current memory policy. 00:09:14.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.908 EAL: Restoring previous memory policy: 0 00:09:14.909 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.909 EAL: request: mp_malloc_sync 00:09:14.909 EAL: No shared files mode enabled, IPC is disabled 00:09:14.909 EAL: Heap on socket 0 was expanded by 66MB 00:09:14.909 EAL: Calling mem event callback 'spdk:(nil)' 00:09:14.909 EAL: request: mp_malloc_sync 00:09:14.909 EAL: No shared files mode enabled, IPC is disabled 00:09:14.909 EAL: Heap on socket 0 was shrunk by 66MB 00:09:14.909 EAL: Trying to obtain current memory policy. 00:09:14.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.170 EAL: Restoring previous memory policy: 0 00:09:15.170 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.170 EAL: request: mp_malloc_sync 00:09:15.170 EAL: No shared files mode enabled, IPC is disabled 00:09:15.170 EAL: Heap on socket 0 was expanded by 130MB 00:09:15.170 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.170 EAL: request: mp_malloc_sync 00:09:15.170 EAL: No shared files mode enabled, IPC is disabled 00:09:15.170 EAL: Heap on socket 0 was shrunk by 130MB 00:09:15.429 EAL: Trying to obtain current memory policy. 00:09:15.429 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.429 EAL: Restoring previous memory policy: 0 00:09:15.429 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.429 EAL: request: mp_malloc_sync 00:09:15.429 EAL: No shared files mode enabled, IPC is disabled 00:09:15.429 EAL: Heap on socket 0 was expanded by 258MB 00:09:15.687 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.946 EAL: request: mp_malloc_sync 00:09:15.946 EAL: No shared files mode enabled, IPC is disabled 00:09:15.946 EAL: Heap on socket 0 was shrunk by 258MB 00:09:16.203 EAL: Trying to obtain current memory policy. 00:09:16.203 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:16.203 EAL: Restoring previous memory policy: 0 00:09:16.203 EAL: Calling mem event callback 'spdk:(nil)' 00:09:16.203 EAL: request: mp_malloc_sync 00:09:16.204 EAL: No shared files mode enabled, IPC is disabled 00:09:16.204 EAL: Heap on socket 0 was expanded by 514MB 00:09:17.139 EAL: Calling mem event callback 'spdk:(nil)' 00:09:17.139 EAL: request: mp_malloc_sync 00:09:17.139 EAL: No shared files mode enabled, IPC is disabled 00:09:17.139 EAL: Heap on socket 0 was shrunk by 514MB 00:09:17.708 EAL: Trying to obtain current memory policy. 
00:09:17.708 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:17.967 EAL: Restoring previous memory policy: 0 00:09:17.967 EAL: Calling mem event callback 'spdk:(nil)' 00:09:17.967 EAL: request: mp_malloc_sync 00:09:17.967 EAL: No shared files mode enabled, IPC is disabled 00:09:17.967 EAL: Heap on socket 0 was expanded by 1026MB 00:09:19.344 EAL: Calling mem event callback 'spdk:(nil)' 00:09:19.602 EAL: request: mp_malloc_sync 00:09:19.602 EAL: No shared files mode enabled, IPC is disabled 00:09:19.602 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:21.505 passed 00:09:21.505 00:09:21.505 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.505 suites 1 1 n/a 0 0 00:09:21.505 tests 2 2 2 0 0 00:09:21.505 asserts 6349 6349 6349 0 n/a 00:09:21.505 00:09:21.505 Elapsed time = 6.948 seconds 00:09:21.505 EAL: Calling mem event callback 'spdk:(nil)' 00:09:21.505 EAL: request: mp_malloc_sync 00:09:21.505 EAL: No shared files mode enabled, IPC is disabled 00:09:21.505 EAL: Heap on socket 0 was shrunk by 2MB 00:09:21.505 EAL: No shared files mode enabled, IPC is disabled 00:09:21.505 EAL: No shared files mode enabled, IPC is disabled 00:09:21.505 EAL: No shared files mode enabled, IPC is disabled 00:09:21.505 00:09:21.505 real 0m7.250s 00:09:21.505 user 0m6.222s 00:09:21.505 sys 0m0.896s 00:09:21.505 00:28:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:21.505 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:09:21.505 ************************************ 00:09:21.505 END TEST env_vtophys 00:09:21.505 ************************************ 00:09:21.505 00:28:54 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:21.505 00:28:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.505 00:28:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.505 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:09:21.505 ************************************ 00:09:21.505 START TEST env_pci 00:09:21.505 ************************************ 00:09:21.505 00:28:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:21.505 00:09:21.505 00:09:21.505 CUnit - A unit testing framework for C - Version 2.1-3 00:09:21.505 http://cunit.sourceforge.net/ 00:09:21.505 00:09:21.505 00:09:21.505 Suite: pci 00:09:21.505 Test: pci_hook ...[2024-04-27 00:28:54.822961] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 110163 has claimed it 00:09:21.505 passed 00:09:21.505 00:09:21.505 Run Summary: Type Total Ran Passed Failed Inactive 00:09:21.505 suites 1 1 n/a 0 0 00:09:21.505 tests 1 1 1 0 0 00:09:21.505 asserts 25 25 25 0 n/a 00:09:21.505 00:09:21.505 Elapsed time = 0.004 seconds 00:09:21.505 EAL: Cannot find device (10000:00:01.0) 00:09:21.505 EAL: Failed to attach device on primary process 00:09:21.505 00:09:21.505 real 0m0.090s 00:09:21.505 user 0m0.064s 00:09:21.505 sys 0m0.026s 00:09:21.505 00:28:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:21.505 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:09:21.505 ************************************ 00:09:21.505 END TEST env_pci 00:09:21.505 ************************************ 00:09:21.505 00:28:54 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:21.505 00:28:54 -- env/env.sh@15 -- # uname 00:09:21.505 00:28:54 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:21.505 00:28:54 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:09:21.505 00:28:54 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:21.505 00:28:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:21.505 00:28:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.505 00:28:54 -- common/autotest_common.sh@10 -- # set +x 00:09:21.505 ************************************ 00:09:21.505 START TEST env_dpdk_post_init 00:09:21.505 ************************************ 00:09:21.505 00:28:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:21.505 EAL: Detected CPU lcores: 10 00:09:21.505 EAL: Detected NUMA nodes: 1 00:09:21.505 EAL: Detected static linkage of DPDK 00:09:21.505 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:21.505 EAL: Selected IOVA mode 'PA' 00:09:21.505 EAL: VFIO support initialized 00:09:21.765 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:21.765 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:21.765 Starting DPDK initialization... 00:09:21.765 Starting SPDK post initialization... 00:09:21.765 SPDK NVMe probe 00:09:21.765 Attaching to 0000:00:10.0 00:09:21.765 Attached to 0000:00:10.0 00:09:21.765 Cleaning up... 00:09:21.765 00:09:21.765 real 0m0.288s 00:09:21.765 user 0m0.091s 00:09:21.765 sys 0m0.099s 00:09:21.765 00:28:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:21.765 00:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.765 ************************************ 00:09:21.765 END TEST env_dpdk_post_init 00:09:21.765 ************************************ 00:09:21.765 00:28:55 -- env/env.sh@26 -- # uname 00:09:21.765 00:28:55 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:21.765 00:28:55 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:21.765 00:28:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.765 00:28:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.765 00:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:21.765 ************************************ 00:09:21.765 START TEST env_mem_callbacks 00:09:21.765 ************************************ 00:09:21.765 00:28:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:22.024 EAL: Detected CPU lcores: 10 00:09:22.024 EAL: Detected NUMA nodes: 1 00:09:22.024 EAL: Detected static linkage of DPDK 00:09:22.024 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:22.024 EAL: Selected IOVA mode 'PA' 00:09:22.024 EAL: VFIO support initialized 00:09:22.024 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:22.024 00:09:22.024 00:09:22.024 CUnit - A unit testing framework for C - Version 2.1-3 00:09:22.024 http://cunit.sourceforge.net/ 00:09:22.024 00:09:22.024 00:09:22.024 Suite: memory 00:09:22.024 Test: test ... 
00:09:22.024 register 0x200000200000 2097152 00:09:22.024 malloc 3145728 00:09:22.024 register 0x200000400000 4194304 00:09:22.024 buf 0x2000004fffc0 len 3145728 PASSED 00:09:22.024 malloc 64 00:09:22.024 buf 0x2000004ffec0 len 64 PASSED 00:09:22.024 malloc 4194304 00:09:22.024 register 0x200000800000 6291456 00:09:22.024 buf 0x2000009fffc0 len 4194304 PASSED 00:09:22.024 free 0x2000004fffc0 3145728 00:09:22.024 free 0x2000004ffec0 64 00:09:22.024 unregister 0x200000400000 4194304 PASSED 00:09:22.024 free 0x2000009fffc0 4194304 00:09:22.024 unregister 0x200000800000 6291456 PASSED 00:09:22.024 malloc 8388608 00:09:22.024 register 0x200000400000 10485760 00:09:22.024 buf 0x2000005fffc0 len 8388608 PASSED 00:09:22.024 free 0x2000005fffc0 8388608 00:09:22.024 unregister 0x200000400000 10485760 PASSED 00:09:22.024 passed 00:09:22.024 00:09:22.024 Run Summary: Type Total Ran Passed Failed Inactive 00:09:22.024 suites 1 1 n/a 0 0 00:09:22.024 tests 1 1 1 0 0 00:09:22.024 asserts 15 15 15 0 n/a 00:09:22.024 00:09:22.024 Elapsed time = 0.068 seconds 00:09:22.283 00:09:22.283 real 0m0.301s 00:09:22.283 user 0m0.112s 00:09:22.283 sys 0m0.089s 00:09:22.283 00:28:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.283 00:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:22.283 ************************************ 00:09:22.283 END TEST env_mem_callbacks 00:09:22.283 ************************************ 00:09:22.283 00:09:22.283 real 0m8.787s 00:09:22.283 user 0m7.067s 00:09:22.283 sys 0m1.383s 00:09:22.283 00:28:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.283 00:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:22.283 ************************************ 00:09:22.283 END TEST env 00:09:22.283 ************************************ 00:09:22.283 00:28:55 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:22.283 00:28:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.283 00:28:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.283 00:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:22.283 ************************************ 00:09:22.283 START TEST rpc 00:09:22.283 ************************************ 00:09:22.283 00:28:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:22.283 * Looking for test storage... 00:09:22.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:22.283 00:28:55 -- rpc/rpc.sh@65 -- # spdk_pid=110308 00:09:22.283 00:28:55 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:22.283 00:28:55 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:22.283 00:28:55 -- rpc/rpc.sh@67 -- # waitforlisten 110308 00:09:22.283 00:28:55 -- common/autotest_common.sh@817 -- # '[' -z 110308 ']' 00:09:22.283 00:28:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.283 00:28:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:22.283 00:28:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
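[editor's note] The rpc suite starting here drives spdk_tgt over /var/tmp/spdk.sock. A minimal sketch of the same pattern follows: start the target with the bdev tracepoint group enabled, wait for its RPC socket, then issue the same calls the rpc_integrity test below makes via rpc_cmd. The polling loop is a stand-in for the harness's waitforlisten helper; the rpc.py invocations mirror commands visible in this log.

```bash
# Hypothetical sketch of the rpc_integrity flow shown below.
cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt -e bdev &
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
./scripts/rpc.py bdev_malloc_create 8 512                      # -> Malloc0: 8 MB, 512 B blocks (16384 blocks)
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # Passthru0 claims Malloc0 ("claimed": true)
./scripts/rpc.py bdev_get_bdevs | jq length                    # 2 bdevs, as the '[' 2 == 2 ']' check asserts
```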
00:09:22.283 00:28:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:22.283 00:28:55 -- common/autotest_common.sh@10 -- # set +x 00:09:22.542 [2024-04-27 00:28:55.947682] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:09:22.542 [2024-04-27 00:28:55.947897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110308 ] 00:09:22.542 [2024-04-27 00:28:56.116488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.801 [2024-04-27 00:28:56.296785] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:22.801 [2024-04-27 00:28:56.296906] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 110308' to capture a snapshot of events at runtime. 00:09:22.801 [2024-04-27 00:28:56.296955] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.801 [2024-04-27 00:28:56.296975] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.801 [2024-04-27 00:28:56.297023] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid110308 for offline analysis/debug. 00:09:22.801 [2024-04-27 00:28:56.297086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.814 00:28:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:23.814 00:28:57 -- common/autotest_common.sh@850 -- # return 0 00:09:23.814 00:28:57 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:23.814 00:28:57 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:23.814 00:28:57 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:23.814 00:28:57 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:23.814 00:28:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.814 00:28:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.814 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.814 ************************************ 00:09:23.814 START TEST rpc_integrity 00:09:23.814 ************************************ 00:09:23.814 00:28:57 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:09:23.814 00:28:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:23.814 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.814 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.814 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.814 00:28:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:23.814 00:28:57 -- rpc/rpc.sh@13 -- # jq length 00:09:23.814 00:28:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:23.814 00:28:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:23.814 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.814 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.814 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.814 00:28:57 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:23.814 00:28:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
00:09:23.814 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.814 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.814 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.814 00:28:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:23.814 { 00:09:23.814 "name": "Malloc0", 00:09:23.814 "aliases": [ 00:09:23.814 "f81fda55-3afb-4004-ab5c-4f4f5b23c1b9" 00:09:23.814 ], 00:09:23.814 "product_name": "Malloc disk", 00:09:23.814 "block_size": 512, 00:09:23.814 "num_blocks": 16384, 00:09:23.814 "uuid": "f81fda55-3afb-4004-ab5c-4f4f5b23c1b9", 00:09:23.814 "assigned_rate_limits": { 00:09:23.814 "rw_ios_per_sec": 0, 00:09:23.814 "rw_mbytes_per_sec": 0, 00:09:23.814 "r_mbytes_per_sec": 0, 00:09:23.814 "w_mbytes_per_sec": 0 00:09:23.814 }, 00:09:23.814 "claimed": false, 00:09:23.814 "zoned": false, 00:09:23.814 "supported_io_types": { 00:09:23.814 "read": true, 00:09:23.814 "write": true, 00:09:23.814 "unmap": true, 00:09:23.814 "write_zeroes": true, 00:09:23.814 "flush": true, 00:09:23.814 "reset": true, 00:09:23.814 "compare": false, 00:09:23.814 "compare_and_write": false, 00:09:23.814 "abort": true, 00:09:23.814 "nvme_admin": false, 00:09:23.814 "nvme_io": false 00:09:23.814 }, 00:09:23.814 "memory_domains": [ 00:09:23.815 { 00:09:23.815 "dma_device_id": "system", 00:09:23.815 "dma_device_type": 1 00:09:23.815 }, 00:09:23.815 { 00:09:23.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.815 "dma_device_type": 2 00:09:23.815 } 00:09:23.815 ], 00:09:23.815 "driver_specific": {} 00:09:23.815 } 00:09:23.815 ]' 00:09:23.815 00:28:57 -- rpc/rpc.sh@17 -- # jq length 00:09:23.815 00:28:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:23.815 00:28:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:23.815 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.815 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.815 [2024-04-27 00:28:57.214593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:23.815 [2024-04-27 00:28:57.214708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.815 [2024-04-27 00:28:57.214765] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:09:23.815 [2024-04-27 00:28:57.214803] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.815 [2024-04-27 00:28:57.217741] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.815 [2024-04-27 00:28:57.217817] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:23.815 Passthru0 00:09:23.815 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.815 00:28:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:23.815 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.815 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.815 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.815 00:28:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:23.815 { 00:09:23.815 "name": "Malloc0", 00:09:23.815 "aliases": [ 00:09:23.815 "f81fda55-3afb-4004-ab5c-4f4f5b23c1b9" 00:09:23.815 ], 00:09:23.815 "product_name": "Malloc disk", 00:09:23.815 "block_size": 512, 00:09:23.815 "num_blocks": 16384, 00:09:23.815 "uuid": "f81fda55-3afb-4004-ab5c-4f4f5b23c1b9", 00:09:23.815 "assigned_rate_limits": { 00:09:23.815 "rw_ios_per_sec": 0, 00:09:23.815 "rw_mbytes_per_sec": 0, 00:09:23.815 "r_mbytes_per_sec": 0, 00:09:23.815 
"w_mbytes_per_sec": 0 00:09:23.815 }, 00:09:23.815 "claimed": true, 00:09:23.815 "claim_type": "exclusive_write", 00:09:23.815 "zoned": false, 00:09:23.815 "supported_io_types": { 00:09:23.815 "read": true, 00:09:23.815 "write": true, 00:09:23.815 "unmap": true, 00:09:23.815 "write_zeroes": true, 00:09:23.815 "flush": true, 00:09:23.815 "reset": true, 00:09:23.815 "compare": false, 00:09:23.815 "compare_and_write": false, 00:09:23.815 "abort": true, 00:09:23.815 "nvme_admin": false, 00:09:23.815 "nvme_io": false 00:09:23.815 }, 00:09:23.815 "memory_domains": [ 00:09:23.815 { 00:09:23.815 "dma_device_id": "system", 00:09:23.815 "dma_device_type": 1 00:09:23.815 }, 00:09:23.815 { 00:09:23.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.815 "dma_device_type": 2 00:09:23.815 } 00:09:23.815 ], 00:09:23.815 "driver_specific": {} 00:09:23.815 }, 00:09:23.815 { 00:09:23.815 "name": "Passthru0", 00:09:23.815 "aliases": [ 00:09:23.815 "5f52b0d5-dc58-5245-81d2-3833077a639a" 00:09:23.815 ], 00:09:23.815 "product_name": "passthru", 00:09:23.815 "block_size": 512, 00:09:23.815 "num_blocks": 16384, 00:09:23.815 "uuid": "5f52b0d5-dc58-5245-81d2-3833077a639a", 00:09:23.815 "assigned_rate_limits": { 00:09:23.815 "rw_ios_per_sec": 0, 00:09:23.815 "rw_mbytes_per_sec": 0, 00:09:23.815 "r_mbytes_per_sec": 0, 00:09:23.815 "w_mbytes_per_sec": 0 00:09:23.815 }, 00:09:23.815 "claimed": false, 00:09:23.815 "zoned": false, 00:09:23.815 "supported_io_types": { 00:09:23.815 "read": true, 00:09:23.815 "write": true, 00:09:23.815 "unmap": true, 00:09:23.815 "write_zeroes": true, 00:09:23.815 "flush": true, 00:09:23.815 "reset": true, 00:09:23.815 "compare": false, 00:09:23.815 "compare_and_write": false, 00:09:23.815 "abort": true, 00:09:23.815 "nvme_admin": false, 00:09:23.815 "nvme_io": false 00:09:23.815 }, 00:09:23.815 "memory_domains": [ 00:09:23.815 { 00:09:23.815 "dma_device_id": "system", 00:09:23.815 "dma_device_type": 1 00:09:23.815 }, 00:09:23.815 { 00:09:23.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.815 "dma_device_type": 2 00:09:23.815 } 00:09:23.815 ], 00:09:23.815 "driver_specific": { 00:09:23.815 "passthru": { 00:09:23.815 "name": "Passthru0", 00:09:23.815 "base_bdev_name": "Malloc0" 00:09:23.815 } 00:09:23.815 } 00:09:23.815 } 00:09:23.815 ]' 00:09:23.815 00:28:57 -- rpc/rpc.sh@21 -- # jq length 00:09:23.815 00:28:57 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:23.815 00:28:57 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:23.815 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.815 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.815 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.815 00:28:57 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:23.815 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.815 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.815 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.815 00:28:57 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:23.815 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.815 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.815 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.815 00:28:57 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:23.815 00:28:57 -- rpc/rpc.sh@26 -- # jq length 00:09:23.815 00:28:57 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:23.815 00:09:23.815 real 0m0.337s 00:09:23.815 user 0m0.210s 00:09:23.815 sys 0m0.032s 00:09:23.815 00:28:57 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:23.815 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.815 ************************************ 00:09:23.815 END TEST rpc_integrity 00:09:23.815 ************************************ 00:09:24.075 00:28:57 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:24.075 00:28:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.075 00:28:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.075 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.075 ************************************ 00:09:24.075 START TEST rpc_plugins 00:09:24.075 ************************************ 00:09:24.075 00:28:57 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:09:24.075 00:28:57 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:24.075 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.075 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.075 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.075 00:28:57 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:24.075 00:28:57 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:24.075 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.075 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.075 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.075 00:28:57 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:24.075 { 00:09:24.075 "name": "Malloc1", 00:09:24.075 "aliases": [ 00:09:24.075 "7f2408bd-024f-4de2-bd21-47dace11269e" 00:09:24.075 ], 00:09:24.075 "product_name": "Malloc disk", 00:09:24.075 "block_size": 4096, 00:09:24.075 "num_blocks": 256, 00:09:24.075 "uuid": "7f2408bd-024f-4de2-bd21-47dace11269e", 00:09:24.075 "assigned_rate_limits": { 00:09:24.075 "rw_ios_per_sec": 0, 00:09:24.075 "rw_mbytes_per_sec": 0, 00:09:24.075 "r_mbytes_per_sec": 0, 00:09:24.075 "w_mbytes_per_sec": 0 00:09:24.075 }, 00:09:24.075 "claimed": false, 00:09:24.075 "zoned": false, 00:09:24.075 "supported_io_types": { 00:09:24.075 "read": true, 00:09:24.075 "write": true, 00:09:24.075 "unmap": true, 00:09:24.075 "write_zeroes": true, 00:09:24.075 "flush": true, 00:09:24.075 "reset": true, 00:09:24.075 "compare": false, 00:09:24.075 "compare_and_write": false, 00:09:24.075 "abort": true, 00:09:24.075 "nvme_admin": false, 00:09:24.075 "nvme_io": false 00:09:24.075 }, 00:09:24.075 "memory_domains": [ 00:09:24.075 { 00:09:24.075 "dma_device_id": "system", 00:09:24.075 "dma_device_type": 1 00:09:24.075 }, 00:09:24.075 { 00:09:24.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.075 "dma_device_type": 2 00:09:24.075 } 00:09:24.075 ], 00:09:24.075 "driver_specific": {} 00:09:24.075 } 00:09:24.075 ]' 00:09:24.075 00:28:57 -- rpc/rpc.sh@32 -- # jq length 00:09:24.075 00:28:57 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:24.075 00:28:57 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:24.075 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.075 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.075 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.075 00:28:57 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:24.075 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.075 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.075 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.075 00:28:57 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:24.075 00:28:57 -- rpc/rpc.sh@36 -- # 
jq length 00:09:24.075 00:28:57 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:24.075 00:09:24.075 real 0m0.158s 00:09:24.075 user 0m0.105s 00:09:24.075 sys 0m0.017s 00:09:24.075 00:28:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.075 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.075 ************************************ 00:09:24.075 END TEST rpc_plugins 00:09:24.075 ************************************ 00:09:24.334 00:28:57 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:24.334 00:28:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.334 00:28:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.334 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.334 ************************************ 00:09:24.334 START TEST rpc_trace_cmd_test 00:09:24.334 ************************************ 00:09:24.334 00:28:57 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:09:24.334 00:28:57 -- rpc/rpc.sh@40 -- # local info 00:09:24.334 00:28:57 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:24.334 00:28:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.334 00:28:57 -- common/autotest_common.sh@10 -- # set +x 00:09:24.334 00:28:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.334 00:28:57 -- rpc/rpc.sh@42 -- # info='{ 00:09:24.334 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid110308", 00:09:24.334 "tpoint_group_mask": "0x8", 00:09:24.334 "iscsi_conn": { 00:09:24.334 "mask": "0x2", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "scsi": { 00:09:24.334 "mask": "0x4", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "bdev": { 00:09:24.334 "mask": "0x8", 00:09:24.334 "tpoint_mask": "0xffffffffffffffff" 00:09:24.334 }, 00:09:24.334 "nvmf_rdma": { 00:09:24.334 "mask": "0x10", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "nvmf_tcp": { 00:09:24.334 "mask": "0x20", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "ftl": { 00:09:24.334 "mask": "0x40", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "blobfs": { 00:09:24.334 "mask": "0x80", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "dsa": { 00:09:24.334 "mask": "0x200", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "thread": { 00:09:24.334 "mask": "0x400", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "nvme_pcie": { 00:09:24.334 "mask": "0x800", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "iaa": { 00:09:24.334 "mask": "0x1000", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "nvme_tcp": { 00:09:24.334 "mask": "0x2000", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "bdev_nvme": { 00:09:24.334 "mask": "0x4000", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 }, 00:09:24.334 "sock": { 00:09:24.334 "mask": "0x8000", 00:09:24.334 "tpoint_mask": "0x0" 00:09:24.334 } 00:09:24.334 }' 00:09:24.334 00:28:57 -- rpc/rpc.sh@43 -- # jq length 00:09:24.334 00:28:57 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:24.334 00:28:57 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:24.334 00:28:57 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:24.334 00:28:57 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:24.334 00:28:57 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:24.334 00:28:57 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:24.594 00:28:57 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:24.594 00:28:57 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 
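[editor's note] The trace_get_info output above reflects the -e bdev flag the target was started with: group mask 0x8 is the bdev group, so every bdev tracepoint is enabled (tpoint_mask 0xffffffffffffffff) while all other groups stay at 0x0. A short sketch of inspecting that state follows; the rpc call and jq filter mirror the log, while the spdk_trace -f flag for decoding a copied shm file is an assumption about the tool's usual interface, not something exercised in this run.

```bash
# Hypothetical sketch: checking the enabled tracepoints and decoding the snapshot.
./scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'   # 0xffffffffffffffff while -e bdev is set
./build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid110308   # offline decode (flag is an assumption)
```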
00:09:24.594 00:28:58 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:24.594 00:09:24.594 real 0m0.268s 00:09:24.594 user 0m0.244s 00:09:24.594 sys 0m0.018s 00:09:24.594 00:28:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.594 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.594 ************************************ 00:09:24.594 END TEST rpc_trace_cmd_test 00:09:24.594 ************************************ 00:09:24.594 00:28:58 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:24.594 00:28:58 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:24.594 00:28:58 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:24.594 00:28:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.594 00:28:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.594 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.594 ************************************ 00:09:24.594 START TEST rpc_daemon_integrity 00:09:24.594 ************************************ 00:09:24.594 00:28:58 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:09:24.594 00:28:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:24.594 00:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.594 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.594 00:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.594 00:28:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:24.594 00:28:58 -- rpc/rpc.sh@13 -- # jq length 00:09:24.594 00:28:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:24.594 00:28:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:24.594 00:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.594 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.594 00:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.594 00:28:58 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:24.594 00:28:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:24.594 00:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.594 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.853 00:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.853 00:28:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:24.853 { 00:09:24.853 "name": "Malloc2", 00:09:24.853 "aliases": [ 00:09:24.853 "f000814c-fc9d-4b49-9b90-5da303680be1" 00:09:24.853 ], 00:09:24.853 "product_name": "Malloc disk", 00:09:24.853 "block_size": 512, 00:09:24.853 "num_blocks": 16384, 00:09:24.853 "uuid": "f000814c-fc9d-4b49-9b90-5da303680be1", 00:09:24.853 "assigned_rate_limits": { 00:09:24.853 "rw_ios_per_sec": 0, 00:09:24.853 "rw_mbytes_per_sec": 0, 00:09:24.853 "r_mbytes_per_sec": 0, 00:09:24.853 "w_mbytes_per_sec": 0 00:09:24.853 }, 00:09:24.853 "claimed": false, 00:09:24.853 "zoned": false, 00:09:24.853 "supported_io_types": { 00:09:24.853 "read": true, 00:09:24.853 "write": true, 00:09:24.853 "unmap": true, 00:09:24.853 "write_zeroes": true, 00:09:24.853 "flush": true, 00:09:24.853 "reset": true, 00:09:24.853 "compare": false, 00:09:24.853 "compare_and_write": false, 00:09:24.853 "abort": true, 00:09:24.853 "nvme_admin": false, 00:09:24.853 "nvme_io": false 00:09:24.853 }, 00:09:24.853 "memory_domains": [ 00:09:24.853 { 00:09:24.853 "dma_device_id": "system", 00:09:24.853 "dma_device_type": 1 00:09:24.853 }, 00:09:24.853 { 00:09:24.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.853 "dma_device_type": 2 00:09:24.853 } 00:09:24.853 ], 00:09:24.853 "driver_specific": {} 00:09:24.853 } 00:09:24.853 ]' 00:09:24.853 00:28:58 -- 
rpc/rpc.sh@17 -- # jq length 00:09:24.853 00:28:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:24.853 00:28:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:24.853 00:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.853 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.853 [2024-04-27 00:28:58.249735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:24.853 [2024-04-27 00:28:58.249854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.853 [2024-04-27 00:28:58.249909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:24.853 [2024-04-27 00:28:58.249940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.853 [2024-04-27 00:28:58.252422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.853 [2024-04-27 00:28:58.252475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:24.853 Passthru0 00:09:24.853 00:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.853 00:28:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:24.853 00:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.853 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.853 00:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.853 00:28:58 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:24.853 { 00:09:24.853 "name": "Malloc2", 00:09:24.853 "aliases": [ 00:09:24.853 "f000814c-fc9d-4b49-9b90-5da303680be1" 00:09:24.853 ], 00:09:24.853 "product_name": "Malloc disk", 00:09:24.853 "block_size": 512, 00:09:24.853 "num_blocks": 16384, 00:09:24.853 "uuid": "f000814c-fc9d-4b49-9b90-5da303680be1", 00:09:24.853 "assigned_rate_limits": { 00:09:24.853 "rw_ios_per_sec": 0, 00:09:24.853 "rw_mbytes_per_sec": 0, 00:09:24.853 "r_mbytes_per_sec": 0, 00:09:24.853 "w_mbytes_per_sec": 0 00:09:24.853 }, 00:09:24.853 "claimed": true, 00:09:24.853 "claim_type": "exclusive_write", 00:09:24.853 "zoned": false, 00:09:24.853 "supported_io_types": { 00:09:24.853 "read": true, 00:09:24.853 "write": true, 00:09:24.853 "unmap": true, 00:09:24.853 "write_zeroes": true, 00:09:24.853 "flush": true, 00:09:24.853 "reset": true, 00:09:24.853 "compare": false, 00:09:24.853 "compare_and_write": false, 00:09:24.853 "abort": true, 00:09:24.853 "nvme_admin": false, 00:09:24.853 "nvme_io": false 00:09:24.853 }, 00:09:24.853 "memory_domains": [ 00:09:24.853 { 00:09:24.853 "dma_device_id": "system", 00:09:24.853 "dma_device_type": 1 00:09:24.853 }, 00:09:24.853 { 00:09:24.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.854 "dma_device_type": 2 00:09:24.854 } 00:09:24.854 ], 00:09:24.854 "driver_specific": {} 00:09:24.854 }, 00:09:24.854 { 00:09:24.854 "name": "Passthru0", 00:09:24.854 "aliases": [ 00:09:24.854 "80a62c9e-602b-5909-b7e9-7f3de946a8db" 00:09:24.854 ], 00:09:24.854 "product_name": "passthru", 00:09:24.854 "block_size": 512, 00:09:24.854 "num_blocks": 16384, 00:09:24.854 "uuid": "80a62c9e-602b-5909-b7e9-7f3de946a8db", 00:09:24.854 "assigned_rate_limits": { 00:09:24.854 "rw_ios_per_sec": 0, 00:09:24.854 "rw_mbytes_per_sec": 0, 00:09:24.854 "r_mbytes_per_sec": 0, 00:09:24.854 "w_mbytes_per_sec": 0 00:09:24.854 }, 00:09:24.854 "claimed": false, 00:09:24.854 "zoned": false, 00:09:24.854 "supported_io_types": { 00:09:24.854 "read": true, 00:09:24.854 "write": true, 00:09:24.854 "unmap": true, 00:09:24.854 "write_zeroes": true, 00:09:24.854 
"flush": true, 00:09:24.854 "reset": true, 00:09:24.854 "compare": false, 00:09:24.854 "compare_and_write": false, 00:09:24.854 "abort": true, 00:09:24.854 "nvme_admin": false, 00:09:24.854 "nvme_io": false 00:09:24.854 }, 00:09:24.854 "memory_domains": [ 00:09:24.854 { 00:09:24.854 "dma_device_id": "system", 00:09:24.854 "dma_device_type": 1 00:09:24.854 }, 00:09:24.854 { 00:09:24.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.854 "dma_device_type": 2 00:09:24.854 } 00:09:24.854 ], 00:09:24.854 "driver_specific": { 00:09:24.854 "passthru": { 00:09:24.854 "name": "Passthru0", 00:09:24.854 "base_bdev_name": "Malloc2" 00:09:24.854 } 00:09:24.854 } 00:09:24.854 } 00:09:24.854 ]' 00:09:24.854 00:28:58 -- rpc/rpc.sh@21 -- # jq length 00:09:24.854 00:28:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:24.854 00:28:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:24.854 00:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.854 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.854 00:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.854 00:28:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:24.854 00:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.854 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.854 00:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.854 00:28:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:24.854 00:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.854 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.854 00:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.854 00:28:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:24.854 00:28:58 -- rpc/rpc.sh@26 -- # jq length 00:09:24.854 00:28:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:24.854 00:09:24.854 real 0m0.336s 00:09:24.854 user 0m0.211s 00:09:24.854 sys 0m0.032s 00:09:24.854 00:28:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:24.854 00:28:58 -- common/autotest_common.sh@10 -- # set +x 00:09:24.854 ************************************ 00:09:24.854 END TEST rpc_daemon_integrity 00:09:24.854 ************************************ 00:09:25.113 00:28:58 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:25.113 00:28:58 -- rpc/rpc.sh@84 -- # killprocess 110308 00:09:25.113 00:28:58 -- common/autotest_common.sh@936 -- # '[' -z 110308 ']' 00:09:25.113 00:28:58 -- common/autotest_common.sh@940 -- # kill -0 110308 00:09:25.113 00:28:58 -- common/autotest_common.sh@941 -- # uname 00:09:25.113 00:28:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:25.113 00:28:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110308 00:09:25.113 00:28:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:25.113 00:28:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:25.113 00:28:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110308' 00:09:25.113 killing process with pid 110308 00:09:25.113 00:28:58 -- common/autotest_common.sh@955 -- # kill 110308 00:09:25.113 00:28:58 -- common/autotest_common.sh@960 -- # wait 110308 00:09:27.019 00:09:27.019 real 0m4.614s 00:09:27.019 user 0m5.446s 00:09:27.019 sys 0m0.807s 00:09:27.019 00:29:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:27.019 00:29:00 -- common/autotest_common.sh@10 -- # set +x 00:09:27.019 ************************************ 00:09:27.019 END TEST rpc 00:09:27.019 
************************************ 00:09:27.019 00:29:00 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:27.019 00:29:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:27.019 00:29:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.019 00:29:00 -- common/autotest_common.sh@10 -- # set +x 00:09:27.019 ************************************ 00:09:27.019 START TEST skip_rpc 00:09:27.019 ************************************ 00:09:27.019 00:29:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:27.019 * Looking for test storage... 00:09:27.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:27.019 00:29:00 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:27.019 00:29:00 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:27.019 00:29:00 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:27.019 00:29:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:27.019 00:29:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.019 00:29:00 -- common/autotest_common.sh@10 -- # set +x 00:09:27.019 ************************************ 00:09:27.019 START TEST skip_rpc 00:09:27.019 ************************************ 00:09:27.019 00:29:00 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:09:27.019 00:29:00 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=110574 00:09:27.019 00:29:00 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:27.019 00:29:00 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:27.019 00:29:00 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:27.278 [2024-04-27 00:29:00.686064] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
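[editor's note] The skip_rpc test starting here launches spdk_tgt with --no-rpc-server, so no /var/tmp/spdk.sock listener is ever created and the spdk_get_version call that follows must fail; the harness's NOT helper inverts that exit status. A minimal sketch of the same negative check follows, with an explicit if/else standing in for NOT; flags and the sleep mirror the log.

```bash
# Hypothetical sketch of the negative RPC check performed below.
cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt=$!
sleep 5   # the harness also sleeps before probing (skip_rpc.sh@19)
if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
  echo "unexpected: RPC server is listening" >&2
  exit 1
fi
kill "$tgt"
wait "$tgt" 2>/dev/null || true
```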
00:09:27.278 [2024-04-27 00:29:00.686303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110574 ] 00:09:27.278 [2024-04-27 00:29:00.852004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.537 [2024-04-27 00:29:01.047198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.809 00:29:05 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:32.809 00:29:05 -- common/autotest_common.sh@638 -- # local es=0 00:09:32.809 00:29:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:32.809 00:29:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:09:32.809 00:29:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:32.809 00:29:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:09:32.809 00:29:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:32.809 00:29:05 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:09:32.809 00:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:32.809 00:29:05 -- common/autotest_common.sh@10 -- # set +x 00:09:32.809 00:29:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:32.809 00:29:05 -- common/autotest_common.sh@641 -- # es=1 00:09:32.809 00:29:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:32.809 00:29:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:32.809 00:29:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:32.809 00:29:05 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:32.809 00:29:05 -- rpc/skip_rpc.sh@23 -- # killprocess 110574 00:09:32.809 00:29:05 -- common/autotest_common.sh@936 -- # '[' -z 110574 ']' 00:09:32.809 00:29:05 -- common/autotest_common.sh@940 -- # kill -0 110574 00:09:32.809 00:29:05 -- common/autotest_common.sh@941 -- # uname 00:09:32.809 00:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:32.809 00:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110574 00:09:32.809 00:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:32.809 00:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:32.809 killing process with pid 110574 00:09:32.809 00:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110574' 00:09:32.809 00:29:05 -- common/autotest_common.sh@955 -- # kill 110574 00:09:32.809 00:29:05 -- common/autotest_common.sh@960 -- # wait 110574 00:09:34.231 00:09:34.231 real 0m7.116s 00:09:34.231 user 0m6.625s 00:09:34.231 sys 0m0.402s 00:09:34.231 00:29:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:34.231 00:29:07 -- common/autotest_common.sh@10 -- # set +x 00:09:34.231 ************************************ 00:09:34.231 END TEST skip_rpc 00:09:34.231 ************************************ 00:09:34.231 00:29:07 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:34.231 00:29:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:34.231 00:29:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.231 00:29:07 -- common/autotest_common.sh@10 -- # set +x 00:09:34.231 ************************************ 00:09:34.231 START TEST skip_rpc_with_json 00:09:34.231 ************************************ 00:09:34.231 00:29:07 -- common/autotest_common.sh@1111 -- # 
test_skip_rpc_with_json 00:09:34.231 00:29:07 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:34.231 00:29:07 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=110697 00:09:34.231 00:29:07 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:34.231 00:29:07 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.231 00:29:07 -- rpc/skip_rpc.sh@31 -- # waitforlisten 110697 00:09:34.231 00:29:07 -- common/autotest_common.sh@817 -- # '[' -z 110697 ']' 00:09:34.231 00:29:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.231 00:29:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:34.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.231 00:29:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.231 00:29:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:34.231 00:29:07 -- common/autotest_common.sh@10 -- # set +x 00:09:34.489 [2024-04-27 00:29:07.867571] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:09:34.489 [2024-04-27 00:29:07.867768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110697 ] 00:09:34.489 [2024-04-27 00:29:08.029951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.747 [2024-04-27 00:29:08.273349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.683 00:29:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:35.683 00:29:08 -- common/autotest_common.sh@850 -- # return 0 00:09:35.683 00:29:08 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:35.683 00:29:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.683 00:29:08 -- common/autotest_common.sh@10 -- # set +x 00:09:35.683 [2024-04-27 00:29:08.999670] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:35.683 request: 00:09:35.683 { 00:09:35.683 "trtype": "tcp", 00:09:35.683 "method": "nvmf_get_transports", 00:09:35.683 "req_id": 1 00:09:35.683 } 00:09:35.683 Got JSON-RPC error response 00:09:35.683 response: 00:09:35.683 { 00:09:35.683 "code": -19, 00:09:35.683 "message": "No such device" 00:09:35.683 } 00:09:35.683 00:29:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:35.683 00:29:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:35.683 00:29:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.683 00:29:09 -- common/autotest_common.sh@10 -- # set +x 00:09:35.683 [2024-04-27 00:29:09.011821] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.683 00:29:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:35.683 00:29:09 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:35.683 00:29:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.683 00:29:09 -- common/autotest_common.sh@10 -- # set +x 00:09:35.683 00:29:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:35.683 00:29:09 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:35.684 { 00:09:35.684 "subsystems": [ 00:09:35.684 { 00:09:35.684 "subsystem": "scheduler", 00:09:35.684 "config": [ 00:09:35.684 { 00:09:35.684 "method": 
"framework_set_scheduler", 00:09:35.684 "params": { 00:09:35.684 "name": "static" 00:09:35.684 } 00:09:35.684 } 00:09:35.684 ] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "vmd", 00:09:35.684 "config": [] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "sock", 00:09:35.684 "config": [ 00:09:35.684 { 00:09:35.684 "method": "sock_impl_set_options", 00:09:35.684 "params": { 00:09:35.684 "impl_name": "posix", 00:09:35.684 "recv_buf_size": 2097152, 00:09:35.684 "send_buf_size": 2097152, 00:09:35.684 "enable_recv_pipe": true, 00:09:35.684 "enable_quickack": false, 00:09:35.684 "enable_placement_id": 0, 00:09:35.684 "enable_zerocopy_send_server": true, 00:09:35.684 "enable_zerocopy_send_client": false, 00:09:35.684 "zerocopy_threshold": 0, 00:09:35.684 "tls_version": 0, 00:09:35.684 "enable_ktls": false 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "sock_impl_set_options", 00:09:35.684 "params": { 00:09:35.684 "impl_name": "ssl", 00:09:35.684 "recv_buf_size": 4096, 00:09:35.684 "send_buf_size": 4096, 00:09:35.684 "enable_recv_pipe": true, 00:09:35.684 "enable_quickack": false, 00:09:35.684 "enable_placement_id": 0, 00:09:35.684 "enable_zerocopy_send_server": true, 00:09:35.684 "enable_zerocopy_send_client": false, 00:09:35.684 "zerocopy_threshold": 0, 00:09:35.684 "tls_version": 0, 00:09:35.684 "enable_ktls": false 00:09:35.684 } 00:09:35.684 } 00:09:35.684 ] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "iobuf", 00:09:35.684 "config": [ 00:09:35.684 { 00:09:35.684 "method": "iobuf_set_options", 00:09:35.684 "params": { 00:09:35.684 "small_pool_count": 8192, 00:09:35.684 "large_pool_count": 1024, 00:09:35.684 "small_bufsize": 8192, 00:09:35.684 "large_bufsize": 135168 00:09:35.684 } 00:09:35.684 } 00:09:35.684 ] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "keyring", 00:09:35.684 "config": [] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "accel", 00:09:35.684 "config": [ 00:09:35.684 { 00:09:35.684 "method": "accel_set_options", 00:09:35.684 "params": { 00:09:35.684 "small_cache_size": 128, 00:09:35.684 "large_cache_size": 16, 00:09:35.684 "task_count": 2048, 00:09:35.684 "sequence_count": 2048, 00:09:35.684 "buf_count": 2048 00:09:35.684 } 00:09:35.684 } 00:09:35.684 ] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "bdev", 00:09:35.684 "config": [ 00:09:35.684 { 00:09:35.684 "method": "bdev_set_options", 00:09:35.684 "params": { 00:09:35.684 "bdev_io_pool_size": 65535, 00:09:35.684 "bdev_io_cache_size": 256, 00:09:35.684 "bdev_auto_examine": true, 00:09:35.684 "iobuf_small_cache_size": 128, 00:09:35.684 "iobuf_large_cache_size": 16 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "bdev_raid_set_options", 00:09:35.684 "params": { 00:09:35.684 "process_window_size_kb": 1024 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "bdev_nvme_set_options", 00:09:35.684 "params": { 00:09:35.684 "action_on_timeout": "none", 00:09:35.684 "timeout_us": 0, 00:09:35.684 "timeout_admin_us": 0, 00:09:35.684 "keep_alive_timeout_ms": 10000, 00:09:35.684 "arbitration_burst": 0, 00:09:35.684 "low_priority_weight": 0, 00:09:35.684 "medium_priority_weight": 0, 00:09:35.684 "high_priority_weight": 0, 00:09:35.684 "nvme_adminq_poll_period_us": 10000, 00:09:35.684 "nvme_ioq_poll_period_us": 0, 00:09:35.684 "io_queue_requests": 0, 00:09:35.684 "delay_cmd_submit": true, 00:09:35.684 "transport_retry_count": 4, 00:09:35.684 "bdev_retry_count": 3, 00:09:35.684 "transport_ack_timeout": 0, 00:09:35.684 
"ctrlr_loss_timeout_sec": 0, 00:09:35.684 "reconnect_delay_sec": 0, 00:09:35.684 "fast_io_fail_timeout_sec": 0, 00:09:35.684 "disable_auto_failback": false, 00:09:35.684 "generate_uuids": false, 00:09:35.684 "transport_tos": 0, 00:09:35.684 "nvme_error_stat": false, 00:09:35.684 "rdma_srq_size": 0, 00:09:35.684 "io_path_stat": false, 00:09:35.684 "allow_accel_sequence": false, 00:09:35.684 "rdma_max_cq_size": 0, 00:09:35.684 "rdma_cm_event_timeout_ms": 0, 00:09:35.684 "dhchap_digests": [ 00:09:35.684 "sha256", 00:09:35.684 "sha384", 00:09:35.684 "sha512" 00:09:35.684 ], 00:09:35.684 "dhchap_dhgroups": [ 00:09:35.684 "null", 00:09:35.684 "ffdhe2048", 00:09:35.684 "ffdhe3072", 00:09:35.684 "ffdhe4096", 00:09:35.684 "ffdhe6144", 00:09:35.684 "ffdhe8192" 00:09:35.684 ] 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "bdev_nvme_set_hotplug", 00:09:35.684 "params": { 00:09:35.684 "period_us": 100000, 00:09:35.684 "enable": false 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "bdev_iscsi_set_options", 00:09:35.684 "params": { 00:09:35.684 "timeout_sec": 30 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "bdev_wait_for_examine" 00:09:35.684 } 00:09:35.684 ] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "nvmf", 00:09:35.684 "config": [ 00:09:35.684 { 00:09:35.684 "method": "nvmf_set_config", 00:09:35.684 "params": { 00:09:35.684 "discovery_filter": "match_any", 00:09:35.684 "admin_cmd_passthru": { 00:09:35.684 "identify_ctrlr": false 00:09:35.684 } 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "nvmf_set_max_subsystems", 00:09:35.684 "params": { 00:09:35.684 "max_subsystems": 1024 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "nvmf_set_crdt", 00:09:35.684 "params": { 00:09:35.684 "crdt1": 0, 00:09:35.684 "crdt2": 0, 00:09:35.684 "crdt3": 0 00:09:35.684 } 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "method": "nvmf_create_transport", 00:09:35.684 "params": { 00:09:35.684 "trtype": "TCP", 00:09:35.684 "max_queue_depth": 128, 00:09:35.684 "max_io_qpairs_per_ctrlr": 127, 00:09:35.684 "in_capsule_data_size": 4096, 00:09:35.684 "max_io_size": 131072, 00:09:35.684 "io_unit_size": 131072, 00:09:35.684 "max_aq_depth": 128, 00:09:35.684 "num_shared_buffers": 511, 00:09:35.684 "buf_cache_size": 4294967295, 00:09:35.684 "dif_insert_or_strip": false, 00:09:35.684 "zcopy": false, 00:09:35.684 "c2h_success": true, 00:09:35.684 "sock_priority": 0, 00:09:35.684 "abort_timeout_sec": 1, 00:09:35.684 "ack_timeout": 0, 00:09:35.684 "data_wr_pool_size": 0 00:09:35.684 } 00:09:35.684 } 00:09:35.684 ] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "nbd", 00:09:35.684 "config": [] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "vhost_blk", 00:09:35.684 "config": [] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "scsi", 00:09:35.684 "config": null 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "iscsi", 00:09:35.684 "config": [ 00:09:35.684 { 00:09:35.684 "method": "iscsi_set_options", 00:09:35.684 "params": { 00:09:35.684 "node_base": "iqn.2016-06.io.spdk", 00:09:35.684 "max_sessions": 128, 00:09:35.684 "max_connections_per_session": 2, 00:09:35.684 "max_queue_depth": 64, 00:09:35.684 "default_time2wait": 2, 00:09:35.684 "default_time2retain": 20, 00:09:35.684 "first_burst_length": 8192, 00:09:35.684 "immediate_data": true, 00:09:35.684 "allow_duplicated_isid": false, 00:09:35.684 "error_recovery_level": 0, 00:09:35.684 "nop_timeout": 60, 00:09:35.684 
"nop_in_interval": 30, 00:09:35.684 "disable_chap": false, 00:09:35.684 "require_chap": false, 00:09:35.684 "mutual_chap": false, 00:09:35.684 "chap_group": 0, 00:09:35.684 "max_large_datain_per_connection": 64, 00:09:35.684 "max_r2t_per_connection": 4, 00:09:35.684 "pdu_pool_size": 36864, 00:09:35.684 "immediate_data_pool_size": 16384, 00:09:35.684 "data_out_pool_size": 2048 00:09:35.684 } 00:09:35.684 } 00:09:35.684 ] 00:09:35.684 }, 00:09:35.684 { 00:09:35.684 "subsystem": "vhost_scsi", 00:09:35.684 "config": [] 00:09:35.684 } 00:09:35.684 ] 00:09:35.684 } 00:09:35.684 00:29:09 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:35.684 00:29:09 -- rpc/skip_rpc.sh@40 -- # killprocess 110697 00:09:35.684 00:29:09 -- common/autotest_common.sh@936 -- # '[' -z 110697 ']' 00:09:35.684 00:29:09 -- common/autotest_common.sh@940 -- # kill -0 110697 00:09:35.684 00:29:09 -- common/autotest_common.sh@941 -- # uname 00:09:35.684 00:29:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:35.684 00:29:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110697 00:09:35.684 00:29:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:35.684 killing process with pid 110697 00:09:35.684 00:29:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:35.684 00:29:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110697' 00:09:35.684 00:29:09 -- common/autotest_common.sh@955 -- # kill 110697 00:09:35.684 00:29:09 -- common/autotest_common.sh@960 -- # wait 110697 00:09:37.589 00:29:11 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=110756 00:09:37.589 00:29:11 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:37.589 00:29:11 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:42.866 00:29:16 -- rpc/skip_rpc.sh@50 -- # killprocess 110756 00:09:42.866 00:29:16 -- common/autotest_common.sh@936 -- # '[' -z 110756 ']' 00:09:42.866 00:29:16 -- common/autotest_common.sh@940 -- # kill -0 110756 00:09:42.866 00:29:16 -- common/autotest_common.sh@941 -- # uname 00:09:42.866 00:29:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:42.866 00:29:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110756 00:09:42.866 00:29:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:42.866 00:29:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:42.866 killing process with pid 110756 00:09:42.866 00:29:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110756' 00:09:42.866 00:29:16 -- common/autotest_common.sh@955 -- # kill 110756 00:09:42.866 00:29:16 -- common/autotest_common.sh@960 -- # wait 110756 00:09:44.771 00:29:18 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:44.771 00:29:18 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:44.771 00:09:44.771 real 0m10.320s 00:09:44.771 user 0m9.807s 00:09:44.771 sys 0m0.855s 00:09:44.771 00:29:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:44.771 00:29:18 -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 ************************************ 00:09:44.771 END TEST skip_rpc_with_json 00:09:44.771 ************************************ 00:09:44.771 00:29:18 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:44.771 00:29:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 
1 ']' 00:09:44.771 00:29:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.771 00:29:18 -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 ************************************ 00:09:44.771 START TEST skip_rpc_with_delay 00:09:44.771 ************************************ 00:09:44.771 00:29:18 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:09:44.771 00:29:18 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:44.771 00:29:18 -- common/autotest_common.sh@638 -- # local es=0 00:09:44.771 00:29:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:44.771 00:29:18 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:44.771 00:29:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:44.771 00:29:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:44.771 00:29:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:44.771 00:29:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:44.771 00:29:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:44.771 00:29:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:44.771 00:29:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:44.771 00:29:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:44.771 [2024-04-27 00:29:18.297206] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
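The NOT/valid_exec_arg trace above is how the suite asserts that spdk_tgt refuses --wait-for-rpc when no RPC server will be started: the wrapped command must exit non-zero for the test to pass. Stripped of the xtrace noise, the helper reduces to roughly the following (a simplified sketch; the real version, visible in the es-handling steps above, also special-cases exit codes above 128 from signals):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))     # succeed only when the wrapped command failed
  }
  NOT spdk_tgt --no-rpc-server --wait-for-rpc   # passes because spdk_tgt exits with an error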
00:09:44.771 [2024-04-27 00:29:18.297463] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:44.771 00:29:18 -- common/autotest_common.sh@641 -- # es=1 00:09:44.771 00:29:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:44.771 00:29:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:44.771 00:29:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:44.771 00:09:44.771 real 0m0.130s 00:09:44.771 user 0m0.074s 00:09:44.771 sys 0m0.056s 00:09:44.771 00:29:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:44.771 00:29:18 -- common/autotest_common.sh@10 -- # set +x 00:09:44.771 ************************************ 00:09:44.771 END TEST skip_rpc_with_delay 00:09:44.771 ************************************ 00:09:45.030 00:29:18 -- rpc/skip_rpc.sh@77 -- # uname 00:09:45.030 00:29:18 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:45.030 00:29:18 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:45.030 00:29:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:45.030 00:29:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.030 00:29:18 -- common/autotest_common.sh@10 -- # set +x 00:09:45.030 ************************************ 00:09:45.030 START TEST exit_on_failed_rpc_init 00:09:45.030 ************************************ 00:09:45.030 00:29:18 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:09:45.030 00:29:18 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:45.030 00:29:18 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=110905 00:09:45.030 00:29:18 -- rpc/skip_rpc.sh@63 -- # waitforlisten 110905 00:09:45.030 00:29:18 -- common/autotest_common.sh@817 -- # '[' -z 110905 ']' 00:09:45.030 00:29:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.030 00:29:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:45.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.030 00:29:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.030 00:29:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:45.030 00:29:18 -- common/autotest_common.sh@10 -- # set +x 00:09:45.030 [2024-04-27 00:29:18.511040] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:09:45.030 [2024-04-27 00:29:18.511290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110905 ] 00:09:45.289 [2024-04-27 00:29:18.676506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.289 [2024-04-27 00:29:18.863439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.225 00:29:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:46.225 00:29:19 -- common/autotest_common.sh@850 -- # return 0 00:09:46.226 00:29:19 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:46.226 00:29:19 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:46.226 00:29:19 -- common/autotest_common.sh@638 -- # local es=0 00:09:46.226 00:29:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:46.226 00:29:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:46.226 00:29:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:46.226 00:29:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:46.226 00:29:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:46.226 00:29:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:46.226 00:29:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:46.226 00:29:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:46.226 00:29:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:46.226 00:29:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:46.226 [2024-04-27 00:29:19.717731] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:09:46.226 [2024-04-27 00:29:19.718011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110928 ] 00:09:46.485 [2024-04-27 00:29:19.892684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.744 [2024-04-27 00:29:20.127960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.744 [2024-04-27 00:29:20.128098] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
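The "socket in use" error just above is the expected outcome: exit_on_failed_rpc_init starts a second spdk_tgt while the first still owns /var/tmp/spdk.sock, and asserts that the second exits non-zero instead of hanging. A rough reproduction, with the settle time assumed rather than taken from the trace:

  spdk_tgt -m 0x1 &                       # first target binds the default /var/tmp/spdk.sock
  pid=$!
  sleep 1                                 # assumed: enough time for rpc_listen to bind
  spdk_tgt -m 0x2                         # second target hits "socket in use" and exits
  echo "second target exited with $?"     # the test requires a non-zero code here
  kill "$pid"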
00:09:46.744 [2024-04-27 00:29:20.128137] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:46.744 [2024-04-27 00:29:20.128159] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.003 00:29:20 -- common/autotest_common.sh@641 -- # es=234 00:09:47.003 00:29:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:47.003 00:29:20 -- common/autotest_common.sh@650 -- # es=106 00:09:47.003 00:29:20 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:47.004 00:29:20 -- common/autotest_common.sh@658 -- # es=1 00:09:47.004 00:29:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:47.004 00:29:20 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:47.004 00:29:20 -- rpc/skip_rpc.sh@70 -- # killprocess 110905 00:09:47.004 00:29:20 -- common/autotest_common.sh@936 -- # '[' -z 110905 ']' 00:09:47.004 00:29:20 -- common/autotest_common.sh@940 -- # kill -0 110905 00:09:47.004 00:29:20 -- common/autotest_common.sh@941 -- # uname 00:09:47.004 00:29:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:47.004 00:29:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110905 00:09:47.004 00:29:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:47.004 00:29:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:47.004 killing process with pid 110905 00:09:47.004 00:29:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110905' 00:09:47.004 00:29:20 -- common/autotest_common.sh@955 -- # kill 110905 00:09:47.004 00:29:20 -- common/autotest_common.sh@960 -- # wait 110905 00:09:49.548 00:09:49.548 real 0m4.149s 00:09:49.548 user 0m4.791s 00:09:49.548 sys 0m0.601s 00:09:49.548 00:29:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.548 00:29:22 -- common/autotest_common.sh@10 -- # set +x 00:09:49.548 ************************************ 00:09:49.548 END TEST exit_on_failed_rpc_init 00:09:49.548 ************************************ 00:09:49.548 00:29:22 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:49.548 00:09:49.548 real 0m22.151s 00:09:49.548 user 0m21.559s 00:09:49.548 sys 0m2.086s 00:09:49.548 00:29:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.548 00:29:22 -- common/autotest_common.sh@10 -- # set +x 00:09:49.548 ************************************ 00:09:49.548 END TEST skip_rpc 00:09:49.548 ************************************ 00:09:49.548 00:29:22 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:49.548 00:29:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.548 00:29:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.548 00:29:22 -- common/autotest_common.sh@10 -- # set +x 00:09:49.548 ************************************ 00:09:49.548 START TEST rpc_client 00:09:49.548 ************************************ 00:09:49.548 00:29:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:49.548 * Looking for test storage... 
00:09:49.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:49.548 00:29:22 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:49.548 OK 00:09:49.548 00:29:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:49.548 00:09:49.548 real 0m0.151s 00:09:49.548 user 0m0.081s 00:09:49.548 sys 0m0.082s 00:09:49.548 00:29:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.548 00:29:22 -- common/autotest_common.sh@10 -- # set +x 00:09:49.548 ************************************ 00:09:49.548 END TEST rpc_client 00:09:49.548 ************************************ 00:09:49.548 00:29:22 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:49.548 00:29:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.548 00:29:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.548 00:29:22 -- common/autotest_common.sh@10 -- # set +x 00:09:49.548 ************************************ 00:09:49.548 START TEST json_config 00:09:49.548 ************************************ 00:09:49.548 00:29:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:49.548 00:29:23 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:49.548 00:29:23 -- nvmf/common.sh@7 -- # uname -s 00:09:49.548 00:29:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.548 00:29:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.548 00:29:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.548 00:29:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.548 00:29:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.548 00:29:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.548 00:29:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.548 00:29:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.548 00:29:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.548 00:29:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.548 00:29:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ab8a725-5863-4682-9828-5c400103450b 00:09:49.548 00:29:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ab8a725-5863-4682-9828-5c400103450b 00:09:49.548 00:29:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.548 00:29:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.548 00:29:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:49.548 00:29:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.548 00:29:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.548 00:29:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.548 00:29:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.548 00:29:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.548 00:29:23 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:49.548 00:29:23 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:49.548 00:29:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:49.548 00:29:23 -- paths/export.sh@5 -- # export PATH 00:09:49.548 00:29:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:49.548 00:29:23 -- nvmf/common.sh@47 -- # : 0 00:09:49.548 00:29:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.548 00:29:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.548 00:29:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.548 00:29:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.548 00:29:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.548 00:29:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.548 00:29:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.548 00:29:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.548 00:29:23 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:49.548 00:29:23 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:49.548 00:29:23 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:49.548 00:29:23 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:49.548 00:29:23 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:49.548 00:29:23 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:49.548 00:29:23 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:49.548 00:29:23 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:49.548 00:29:23 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:49.548 00:29:23 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:49.548 00:29:23 -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:49.548 00:29:23 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:49.548 00:29:23 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:49.548 00:29:23 -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:49.548 00:29:23 -- json_config/json_config.sh@355 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:09:49.548 INFO: JSON configuration test init 00:09:49.548 00:29:23 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:49.548 00:29:23 -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:49.548 00:29:23 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:49.548 00:29:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:49.548 00:29:23 -- common/autotest_common.sh@10 -- # set +x 00:09:49.548 00:29:23 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:49.548 00:29:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:49.548 00:29:23 -- common/autotest_common.sh@10 -- # set +x 00:09:49.548 00:29:23 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:49.548 00:29:23 -- json_config/common.sh@9 -- # local app=target 00:09:49.548 00:29:23 -- json_config/common.sh@10 -- # shift 00:09:49.548 00:29:23 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:49.548 00:29:23 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:49.548 00:29:23 -- json_config/common.sh@15 -- # local app_extra_params= 00:09:49.548 00:29:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:49.548 00:29:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:49.548 00:29:23 -- json_config/common.sh@22 -- # app_pid["$app"]=111094 00:09:49.548 Waiting for target to run... 00:09:49.548 00:29:23 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:49.548 00:29:23 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:49.548 00:29:23 -- json_config/common.sh@25 -- # waitforlisten 111094 /var/tmp/spdk_tgt.sock 00:09:49.548 00:29:23 -- common/autotest_common.sh@817 -- # '[' -z 111094 ']' 00:09:49.548 00:29:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:49.548 00:29:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:49.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:49.548 00:29:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:49.548 00:29:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:49.548 00:29:23 -- common/autotest_common.sh@10 -- # set +x 00:09:49.548 [2024-04-27 00:29:23.129445] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:09:49.548 [2024-04-27 00:29:23.129645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111094 ] 00:09:50.116 [2024-04-27 00:29:23.590228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.375 [2024-04-27 00:29:23.767569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.633 00:29:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:50.633 00:29:24 -- common/autotest_common.sh@850 -- # return 0 00:09:50.633 00:09:50.633 00:29:24 -- json_config/common.sh@26 -- # echo '' 00:09:50.633 00:29:24 -- json_config/json_config.sh@269 -- # create_accel_config 00:09:50.633 00:29:24 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:50.633 00:29:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:50.633 00:29:24 -- common/autotest_common.sh@10 -- # set +x 00:09:50.633 00:29:24 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:50.633 00:29:24 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:50.633 00:29:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:50.634 00:29:24 -- common/autotest_common.sh@10 -- # set +x 00:09:50.634 00:29:24 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:50.634 00:29:24 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:50.634 00:29:24 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:51.569 00:29:25 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:51.569 00:29:25 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:51.569 00:29:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:51.569 00:29:25 -- common/autotest_common.sh@10 -- # set +x 00:09:51.569 00:29:25 -- json_config/json_config.sh@45 -- # local ret=0 00:09:51.569 00:29:25 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:51.569 00:29:25 -- json_config/json_config.sh@46 -- # local enabled_types 00:09:51.569 00:29:25 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:51.569 00:29:25 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:51.569 00:29:25 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:51.828 00:29:25 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:51.828 00:29:25 -- json_config/json_config.sh@48 -- # local get_types 00:09:51.828 00:29:25 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:51.828 00:29:25 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:51.828 00:29:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:51.828 00:29:25 -- common/autotest_common.sh@10 -- # set +x 00:09:51.828 00:29:25 -- json_config/json_config.sh@55 -- # return 0 00:09:51.828 00:29:25 -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:51.828 00:29:25 -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:51.828 00:29:25 -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:51.828 00:29:25 -- 
common/autotest_common.sh@710 -- # xtrace_disable 00:09:51.828 00:29:25 -- common/autotest_common.sh@10 -- # set +x 00:09:51.828 00:29:25 -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:51.828 00:29:25 -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:51.828 00:29:25 -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:51.828 00:29:25 -- json_config/json_config.sh@111 -- # get_notifications 00:09:51.828 00:29:25 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:51.828 00:29:25 -- json_config/json_config.sh@61 -- # IFS=: 00:09:51.828 00:29:25 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:51.828 00:29:25 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:51.828 00:29:25 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:51.828 00:29:25 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:52.087 00:29:25 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:52.087 00:29:25 -- json_config/json_config.sh@61 -- # IFS=: 00:09:52.087 00:29:25 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:52.087 00:29:25 -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:52.087 00:29:25 -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:52.087 00:29:25 -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:52.087 00:29:25 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:52.345 Nvme0n1p0 Nvme0n1p1 00:09:52.345 00:29:25 -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:52.345 00:29:25 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:52.605 [2024-04-27 00:29:26.105535] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:52.605 [2024-04-27 00:29:26.105698] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:52.605 00:09:52.605 00:29:26 -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:52.605 00:29:26 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:52.863 Malloc3 00:09:52.863 00:29:26 -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:52.863 00:29:26 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:53.123 [2024-04-27 00:29:26.694567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:53.123 [2024-04-27 00:29:26.694758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.123 [2024-04-27 00:29:26.694820] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:53.123 [2024-04-27 00:29:26.694850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.123 [2024-04-27 00:29:26.697785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.123 [2024-04-27 00:29:26.697882] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:53.123 PTBdevFromMalloc3 00:09:53.382 00:29:26 -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:53.382 00:29:26 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:53.382 Null0 00:09:53.382 00:29:26 -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:53.382 00:29:26 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:53.641 Malloc0 00:09:53.899 00:29:27 -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:53.899 00:29:27 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:53.899 Malloc1 00:09:53.899 00:29:27 -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:53.899 00:29:27 -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:54.473 102400+0 records in 00:09:54.473 102400+0 records out 00:09:54.473 104857600 bytes (105 MB, 100 MiB) copied, 0.336607 s, 312 MB/s 00:09:54.473 00:29:27 -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:54.473 00:29:27 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:54.745 aio_disk 00:09:54.745 00:29:28 -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:54.745 00:29:28 -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:54.745 00:29:28 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:54.745 0263e9ec-fc69-4269-aeeb-72552fc17ef6 00:09:54.745 00:29:28 -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:54.745 00:29:28 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:54.745 00:29:28 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:55.004 00:29:28 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:55.004 00:29:28 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:55.262 00:29:28 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:55.262 00:29:28 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 
snapshot0 00:09:55.521 00:29:29 -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:55.521 00:29:29 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:55.779 00:29:29 -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:09:55.779 00:29:29 -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:09:55.779 00:29:29 -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8d025b0c-e80b-4c95-a959-501e0827a7b0 bdev_register:6150073e-87d3-43b6-9cc6-c33e3224565a bdev_register:863a52d1-eaf2-4f39-af4a-eb95285d94c5 bdev_register:423894c4-c8a5-4a58-b163-0cef336e589e 00:09:55.779 00:29:29 -- json_config/json_config.sh@67 -- # local events_to_check 00:09:55.779 00:29:29 -- json_config/json_config.sh@68 -- # local recorded_events 00:09:55.779 00:29:29 -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:55.779 00:29:29 -- json_config/json_config.sh@71 -- # sort 00:09:55.780 00:29:29 -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8d025b0c-e80b-4c95-a959-501e0827a7b0 bdev_register:6150073e-87d3-43b6-9cc6-c33e3224565a bdev_register:863a52d1-eaf2-4f39-af4a-eb95285d94c5 bdev_register:423894c4-c8a5-4a58-b163-0cef336e589e 00:09:55.780 00:29:29 -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:09:55.780 00:29:29 -- json_config/json_config.sh@72 -- # get_notifications 00:09:55.780 00:29:29 -- json_config/json_config.sh@72 -- # sort 00:09:55.780 00:29:29 -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:55.780 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.780 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.780 00:29:29 -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:55.780 00:29:29 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:55.780 00:29:29 -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- 
# echo bdev_register:Malloc3 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:8d025b0c-e80b-4c95-a959-501e0827a7b0 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:6150073e-87d3-43b6-9cc6-c33e3224565a 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:863a52d1-eaf2-4f39-af4a-eb95285d94c5 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@62 -- # echo bdev_register:423894c4-c8a5-4a58-b163-0cef336e589e 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # IFS=: 00:09:56.039 00:29:29 -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:56.039 00:29:29 -- json_config/json_config.sh@74 -- # [[ bdev_register:423894c4-c8a5-4a58-b163-0cef336e589e bdev_register:6150073e-87d3-43b6-9cc6-c33e3224565a bdev_register:863a52d1-eaf2-4f39-af4a-eb95285d94c5 bdev_register:8d025b0c-e80b-4c95-a959-501e0827a7b0 bdev_register:Malloc0 
bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\2\3\8\9\4\c\4\-\c\8\a\5\-\4\a\5\8\-\b\1\6\3\-\0\c\e\f\3\3\6\e\5\8\9\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\1\5\0\0\7\3\e\-\8\7\d\3\-\4\3\b\6\-\9\c\c\6\-\c\3\3\e\3\2\2\4\5\6\5\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\6\3\a\5\2\d\1\-\e\a\f\2\-\4\f\3\9\-\a\f\4\a\-\e\b\9\5\2\8\5\d\9\4\c\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\d\0\2\5\b\0\c\-\e\8\0\b\-\4\c\9\5\-\a\9\5\9\-\5\0\1\e\0\8\2\7\a\7\b\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:09:56.039 00:29:29 -- json_config/json_config.sh@86 -- # cat 00:09:56.039 00:29:29 -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:423894c4-c8a5-4a58-b163-0cef336e589e bdev_register:6150073e-87d3-43b6-9cc6-c33e3224565a bdev_register:863a52d1-eaf2-4f39-af4a-eb95285d94c5 bdev_register:8d025b0c-e80b-4c95-a959-501e0827a7b0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:09:56.039 Expected events matched: 00:09:56.039 bdev_register:423894c4-c8a5-4a58-b163-0cef336e589e 00:09:56.039 bdev_register:6150073e-87d3-43b6-9cc6-c33e3224565a 00:09:56.039 bdev_register:863a52d1-eaf2-4f39-af4a-eb95285d94c5 00:09:56.039 bdev_register:8d025b0c-e80b-4c95-a959-501e0827a7b0 00:09:56.039 bdev_register:Malloc0 00:09:56.039 bdev_register:Malloc0p0 00:09:56.039 bdev_register:Malloc0p1 00:09:56.039 bdev_register:Malloc0p2 00:09:56.039 bdev_register:Malloc1 00:09:56.039 bdev_register:Malloc3 00:09:56.039 bdev_register:Null0 00:09:56.039 bdev_register:Nvme0n1 00:09:56.039 bdev_register:Nvme0n1p0 00:09:56.039 bdev_register:Nvme0n1p1 00:09:56.039 bdev_register:PTBdevFromMalloc3 00:09:56.039 bdev_register:aio_disk 00:09:56.039 00:29:29 -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:09:56.039 00:29:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:56.039 00:29:29 -- common/autotest_common.sh@10 -- # set +x 00:09:56.298 00:29:29 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:09:56.298 00:29:29 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:56.298 00:29:29 -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:56.298 00:29:29 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:09:56.298 00:29:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:56.298 00:29:29 -- common/autotest_common.sh@10 -- # set +x 00:09:56.298 00:29:29 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:09:56.298 00:29:29 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:56.298 00:29:29 -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:56.557 MallocBdevForConfigChangeCheck 00:09:56.557 00:29:29 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:09:56.557 00:29:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:56.557 00:29:29 -- common/autotest_common.sh@10 -- # set +x 00:09:56.557 00:29:30 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:09:56.557 00:29:30 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:56.816 INFO: shutting down applications... 00:09:56.816 00:29:30 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:09:56.816 00:29:30 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:09:56.816 00:29:30 -- json_config/json_config.sh@368 -- # json_config_clear target 00:09:56.816 00:29:30 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:09:56.816 00:29:30 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:57.074 [2024-04-27 00:29:30.510976] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:57.332 Calling clear_vhost_scsi_subsystem 00:09:57.332 Calling clear_iscsi_subsystem 00:09:57.332 Calling clear_vhost_blk_subsystem 00:09:57.332 Calling clear_nbd_subsystem 00:09:57.332 Calling clear_nvmf_subsystem 00:09:57.332 Calling clear_bdev_subsystem 00:09:57.332 00:29:30 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:57.332 00:29:30 -- json_config/json_config.sh@343 -- # count=100 00:09:57.332 00:29:30 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:09:57.332 00:29:30 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:57.332 00:29:30 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:57.332 00:29:30 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:57.590 00:29:31 -- json_config/json_config.sh@345 -- # break 00:09:57.590 00:29:31 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:09:57.590 00:29:31 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:09:57.590 00:29:31 -- json_config/common.sh@31 -- # local app=target 00:09:57.590 00:29:31 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:57.590 00:29:31 -- json_config/common.sh@35 -- # [[ -n 111094 ]] 00:09:57.590 00:29:31 -- json_config/common.sh@38 -- # kill -SIGINT 111094 00:09:57.590 00:29:31 -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:57.590 00:29:31 -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:57.590 00:29:31 -- json_config/common.sh@41 -- # kill -0 111094 00:09:57.590 00:29:31 -- json_config/common.sh@45 -- # sleep 0.5 00:09:58.156 00:29:31 -- json_config/common.sh@40 -- # (( i++ )) 00:09:58.156 00:29:31 -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:58.156 00:29:31 -- json_config/common.sh@41 -- # kill -0 111094 00:09:58.156 00:29:31 -- json_config/common.sh@45 -- # sleep 0.5 00:09:58.724 00:29:32 -- json_config/common.sh@40 -- # (( i++ )) 00:09:58.724 00:29:32 -- json_config/common.sh@40 -- # (( i < 
30 )) 00:09:58.724 00:29:32 -- json_config/common.sh@41 -- # kill -0 111094 00:09:58.724 00:29:32 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:58.724 00:29:32 -- json_config/common.sh@43 -- # break 00:09:58.724 00:29:32 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:58.724 SPDK target shutdown done 00:09:58.724 00:29:32 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:58.724 INFO: relaunching applications... 00:09:58.724 00:29:32 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:09:58.724 00:29:32 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:58.724 00:29:32 -- json_config/common.sh@9 -- # local app=target 00:09:58.724 00:29:32 -- json_config/common.sh@10 -- # shift 00:09:58.724 00:29:32 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:58.724 00:29:32 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:58.724 00:29:32 -- json_config/common.sh@15 -- # local app_extra_params= 00:09:58.724 00:29:32 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:58.724 00:29:32 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:58.724 00:29:32 -- json_config/common.sh@22 -- # app_pid["$app"]=111363 00:09:58.724 Waiting for target to run... 00:09:58.724 00:29:32 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:58.724 00:29:32 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:58.724 00:29:32 -- json_config/common.sh@25 -- # waitforlisten 111363 /var/tmp/spdk_tgt.sock 00:09:58.724 00:29:32 -- common/autotest_common.sh@817 -- # '[' -z 111363 ']' 00:09:58.724 00:29:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:58.724 00:29:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:58.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:58.724 00:29:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:58.724 00:29:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:58.724 00:29:32 -- common/autotest_common.sh@10 -- # set +x 00:09:58.724 [2024-04-27 00:29:32.170042] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
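As context for the "Waiting for process to start up and listen on UNIX domain socket" messages above: the harness polls the relaunched target before issuing any RPCs. A minimal sketch of that polling pattern in bash follows; the retry budget and the use of spdk_get_version as the probe are illustrative simplifications, not the exact waitforlisten implementation.

# Sketch: poll until a relaunched spdk_tgt accepts RPCs on its UNIX socket.
# The socket path matches this run; retry count and probe RPC are assumed.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} retries=100
    while (( retries-- > 0 )); do
        # Bail out early if the target died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        # A cheap RPC succeeding means the server is up and listening.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 spdk_get_version &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}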
00:09:58.724 [2024-04-27 00:29:32.170259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111363 ] 00:09:59.291 [2024-04-27 00:29:32.653047] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.291 [2024-04-27 00:29:32.811175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.865 [2024-04-27 00:29:33.436690] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:59.865 [2024-04-27 00:29:33.436801] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:59.865 [2024-04-27 00:29:33.444659] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:59.865 [2024-04-27 00:29:33.444742] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:00.123 [2024-04-27 00:29:33.452747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:00.123 [2024-04-27 00:29:33.452871] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:00.123 [2024-04-27 00:29:33.452930] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:00.123 [2024-04-27 00:29:33.544399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:00.123 [2024-04-27 00:29:33.544514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.123 [2024-04-27 00:29:33.544558] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:00.123 [2024-04-27 00:29:33.544591] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.123 [2024-04-27 00:29:33.545130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.123 [2024-04-27 00:29:33.545175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:00.123 00:29:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:00.123 00:29:33 -- common/autotest_common.sh@850 -- # return 0 00:10:00.123 00:10:00.123 00:29:33 -- json_config/common.sh@26 -- # echo '' 00:10:00.123 00:29:33 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:00.123 INFO: Checking if target configuration is the same... 00:10:00.123 00:29:33 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:00.123 00:29:33 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:00.123 00:29:33 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:00.123 00:29:33 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:00.123 + '[' 2 -ne 2 ']' 00:10:00.123 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:00.123 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:00.123 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:00.123 +++ basename /dev/fd/62 00:10:00.123 ++ mktemp /tmp/62.XXX 00:10:00.123 + tmp_file_1=/tmp/62.55B 00:10:00.382 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:00.382 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:00.382 + tmp_file_2=/tmp/spdk_tgt_config.json.Iue 00:10:00.382 + ret=0 00:10:00.382 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:00.639 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:00.639 + diff -u /tmp/62.55B /tmp/spdk_tgt_config.json.Iue 00:10:00.639 INFO: JSON config files are the same 00:10:00.639 + echo 'INFO: JSON config files are the same' 00:10:00.639 + rm /tmp/62.55B /tmp/spdk_tgt_config.json.Iue 00:10:00.639 + exit 0 00:10:00.639 00:29:34 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:00.639 00:29:34 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:00.639 INFO: changing configuration and checking if this can be detected... 00:10:00.639 00:29:34 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:00.639 00:29:34 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:00.896 00:29:34 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:00.896 00:29:34 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:00.896 00:29:34 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:00.896 + '[' 2 -ne 2 ']' 00:10:00.896 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:00.896 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:00.896 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:00.896 +++ basename /dev/fd/62 00:10:00.896 ++ mktemp /tmp/62.XXX 00:10:00.896 + tmp_file_1=/tmp/62.v0L 00:10:00.896 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:00.896 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:00.896 + tmp_file_2=/tmp/spdk_tgt_config.json.eD3 00:10:00.896 + ret=0 00:10:00.896 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:01.463 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:01.463 + diff -u /tmp/62.v0L /tmp/spdk_tgt_config.json.eD3 00:10:01.463 + ret=1 00:10:01.463 + echo '=== Start of file: /tmp/62.v0L ===' 00:10:01.463 + cat /tmp/62.v0L 00:10:01.463 + echo '=== End of file: /tmp/62.v0L ===' 00:10:01.463 + echo '' 00:10:01.463 + echo '=== Start of file: /tmp/spdk_tgt_config.json.eD3 ===' 00:10:01.463 + cat /tmp/spdk_tgt_config.json.eD3 00:10:01.463 + echo '=== End of file: /tmp/spdk_tgt_config.json.eD3 ===' 00:10:01.463 + echo '' 00:10:01.463 + rm /tmp/62.v0L /tmp/spdk_tgt_config.json.eD3 00:10:01.463 + exit 1 00:10:01.463 00:29:34 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:10:01.463 INFO: configuration change detected. 
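The "configuration change detected" verdict just above falls out of a plain textual diff between two normalized config dumps, as the json_diff.sh trace shows. A condensed sketch of that comparison, assuming config_filter.py reads the config on stdin as json_diff.sh uses it (temp-file names are illustrative):

# Sketch: detect a config change by diffing two sorted save_config dumps.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
before=$(mktemp /tmp/cfg.XXX)
after=$(mktemp /tmp/cfg.XXX)

# Dump and normalize the running config, mutate it, then dump again.
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$before"
$rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
$rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$after"

# diff's non-zero exit status is the "configuration change detected" signal.
if ! diff -u "$before" "$after"; then
    echo 'INFO: configuration change detected.'
fi
rm -f "$before" "$after"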
00:10:01.463 00:29:34 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:01.463 00:29:34 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:01.463 00:29:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:01.463 00:29:34 -- common/autotest_common.sh@10 -- # set +x 00:10:01.463 00:29:34 -- json_config/json_config.sh@307 -- # local ret=0 00:10:01.463 00:29:34 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:01.463 00:29:34 -- json_config/json_config.sh@317 -- # [[ -n 111363 ]] 00:10:01.463 00:29:34 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:01.463 00:29:34 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:01.463 00:29:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:01.463 00:29:34 -- common/autotest_common.sh@10 -- # set +x 00:10:01.463 00:29:34 -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:01.463 00:29:34 -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:01.463 00:29:34 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:01.463 00:29:35 -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:01.463 00:29:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:01.721 00:29:35 -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:01.721 00:29:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:01.980 00:29:35 -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:01.980 00:29:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:02.238 00:29:35 -- json_config/json_config.sh@193 -- # uname -s 00:10:02.238 00:29:35 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:02.238 00:29:35 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:02.238 00:29:35 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:02.238 00:29:35 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:02.238 00:29:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:02.238 00:29:35 -- common/autotest_common.sh@10 -- # set +x 00:10:02.238 00:29:35 -- json_config/json_config.sh@323 -- # killprocess 111363 00:10:02.238 00:29:35 -- common/autotest_common.sh@936 -- # '[' -z 111363 ']' 00:10:02.238 00:29:35 -- common/autotest_common.sh@940 -- # kill -0 111363 00:10:02.238 00:29:35 -- common/autotest_common.sh@941 -- # uname 00:10:02.238 00:29:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:02.238 00:29:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111363 00:10:02.238 00:29:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:02.238 00:29:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:02.238 killing process with pid 111363 00:10:02.238 00:29:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111363' 00:10:02.238 00:29:35 -- common/autotest_common.sh@955 -- # kill 111363 00:10:02.238 00:29:35 -- common/autotest_common.sh@960 -- # wait 111363 00:10:03.175 00:29:36 -- json_config/json_config.sh@326 -- # rm -f 
/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:03.175 00:29:36 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:03.175 00:29:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:03.175 00:29:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.435 00:29:36 -- json_config/json_config.sh@328 -- # return 0 00:10:03.435 INFO: Success 00:10:03.435 00:29:36 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:03.435 00:10:03.435 real 0m13.846s 00:10:03.435 user 0m19.791s 00:10:03.435 sys 0m2.579s 00:10:03.435 00:29:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:03.435 00:29:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.435 ************************************ 00:10:03.435 END TEST json_config 00:10:03.435 ************************************ 00:10:03.435 00:29:36 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:03.435 00:29:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:03.435 00:29:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.435 00:29:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.435 ************************************ 00:10:03.435 START TEST json_config_extra_key 00:10:03.435 ************************************ 00:10:03.435 00:29:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.435 00:29:36 -- nvmf/common.sh@7 -- # uname -s 00:10:03.435 00:29:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.435 00:29:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.435 00:29:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.435 00:29:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.435 00:29:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.435 00:29:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.435 00:29:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.435 00:29:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.435 00:29:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.435 00:29:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.435 00:29:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dbb8633-76d4-4edf-9de8-a6a4f4c48058 00:10:03.435 00:29:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=1dbb8633-76d4-4edf-9de8-a6a4f4c48058 00:10:03.435 00:29:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.435 00:29:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.435 00:29:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:03.435 00:29:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.435 00:29:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.435 00:29:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.435 00:29:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.435 00:29:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.435 00:29:36 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:03.435 00:29:36 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:03.435 00:29:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:03.435 00:29:36 -- paths/export.sh@5 -- # export PATH 00:10:03.435 00:29:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:03.435 00:29:36 -- nvmf/common.sh@47 -- # : 0 00:10:03.435 00:29:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.435 00:29:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.435 00:29:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.435 00:29:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.435 00:29:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.435 00:29:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.435 00:29:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.435 00:29:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:03.435 INFO: launching applications... 
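The common.sh setup traced above keeps all per-application state in bash associative arrays keyed by app name; this run only ever defines the 'target' app. A stripped-down sketch of that bookkeeping plus the launch step it feeds (reduced from the trace, with the start function simplified):

# Sketch: per-app state from json_config/common.sh, 'target' app only.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

json_config_test_start_app() {
    local app=$1
    # Launch spdk_tgt with the app's params, RPC socket and JSON config,
    # leaving the params unquoted so they word-split into separate flags.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!
}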
00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:03.435 00:29:36 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:03.435 00:29:36 -- json_config/common.sh@9 -- # local app=target 00:10:03.435 00:29:36 -- json_config/common.sh@10 -- # shift 00:10:03.435 00:29:36 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:03.435 00:29:36 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:03.435 00:29:36 -- json_config/common.sh@15 -- # local app_extra_params= 00:10:03.435 00:29:36 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:03.435 00:29:36 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:03.435 00:29:36 -- json_config/common.sh@22 -- # app_pid["$app"]=111547 00:10:03.435 00:29:36 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:03.435 Waiting for target to run... 00:10:03.435 00:29:36 -- json_config/common.sh@25 -- # waitforlisten 111547 /var/tmp/spdk_tgt.sock 00:10:03.435 00:29:36 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:03.435 00:29:36 -- common/autotest_common.sh@817 -- # '[' -z 111547 ']' 00:10:03.435 00:29:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:03.435 00:29:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:03.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:03.435 00:29:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:03.436 00:29:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:03.436 00:29:36 -- common/autotest_common.sh@10 -- # set +x 00:10:03.695 [2024-04-27 00:29:37.046224] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:10:03.695 [2024-04-27 00:29:37.046496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111547 ] 00:10:03.954 [2024-04-27 00:29:37.512803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.212 [2024-04-27 00:29:37.685079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.780 00:29:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:04.780 00:29:38 -- common/autotest_common.sh@850 -- # return 0 00:10:04.780 00:10:04.780 00:29:38 -- json_config/common.sh@26 -- # echo '' 00:10:04.780 INFO: shutting down applications... 00:10:04.780 00:29:38 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:10:04.780 00:29:38 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:04.780 00:29:38 -- json_config/common.sh@31 -- # local app=target 00:10:04.780 00:29:38 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:04.780 00:29:38 -- json_config/common.sh@35 -- # [[ -n 111547 ]] 00:10:04.780 00:29:38 -- json_config/common.sh@38 -- # kill -SIGINT 111547 00:10:04.780 00:29:38 -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:04.780 00:29:38 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:04.780 00:29:38 -- json_config/common.sh@41 -- # kill -0 111547 00:10:04.780 00:29:38 -- json_config/common.sh@45 -- # sleep 0.5 00:10:05.348 00:29:38 -- json_config/common.sh@40 -- # (( i++ )) 00:10:05.348 00:29:38 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:05.348 00:29:38 -- json_config/common.sh@41 -- # kill -0 111547 00:10:05.348 00:29:38 -- json_config/common.sh@45 -- # sleep 0.5 00:10:05.916 00:29:39 -- json_config/common.sh@40 -- # (( i++ )) 00:10:05.916 00:29:39 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:05.916 00:29:39 -- json_config/common.sh@41 -- # kill -0 111547 00:10:05.916 00:29:39 -- json_config/common.sh@45 -- # sleep 0.5 00:10:06.175 00:29:39 -- json_config/common.sh@40 -- # (( i++ )) 00:10:06.175 00:29:39 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:06.175 00:29:39 -- json_config/common.sh@41 -- # kill -0 111547 00:10:06.175 00:29:39 -- json_config/common.sh@45 -- # sleep 0.5 00:10:06.742 00:29:40 -- json_config/common.sh@40 -- # (( i++ )) 00:10:06.742 00:29:40 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:06.742 00:29:40 -- json_config/common.sh@41 -- # kill -0 111547 00:10:06.742 00:29:40 -- json_config/common.sh@45 -- # sleep 0.5 00:10:07.311 00:29:40 -- json_config/common.sh@40 -- # (( i++ )) 00:10:07.311 00:29:40 -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.311 00:29:40 -- json_config/common.sh@41 -- # kill -0 111547 00:10:07.311 00:29:40 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:07.311 00:29:40 -- json_config/common.sh@43 -- # break 00:10:07.311 00:29:40 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:07.311 SPDK target shutdown done 00:10:07.311 00:29:40 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:07.311 Success 00:10:07.311 00:29:40 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:07.311 00:10:07.311 real 0m3.836s 00:10:07.311 user 0m3.393s 00:10:07.311 sys 0m0.535s 00:10:07.311 00:29:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:07.311 00:29:40 -- common/autotest_common.sh@10 -- # set +x 00:10:07.311 ************************************ 00:10:07.311 END TEST json_config_extra_key 00:10:07.311 ************************************ 00:10:07.311 00:29:40 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:07.311 00:29:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:07.311 00:29:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:07.311 00:29:40 -- common/autotest_common.sh@10 -- # set +x 00:10:07.311 ************************************ 00:10:07.311 START TEST alias_rpc 00:10:07.311 ************************************ 00:10:07.311 00:29:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:07.311 * Looking for test storage... 
00:10:07.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:07.570 00:29:40 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:07.570 00:29:40 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=111661 00:10:07.570 00:29:40 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:07.570 00:29:40 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 111661 00:10:07.570 00:29:40 -- common/autotest_common.sh@817 -- # '[' -z 111661 ']' 00:10:07.570 00:29:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.570 00:29:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:07.570 00:29:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.570 00:29:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:07.570 00:29:40 -- common/autotest_common.sh@10 -- # set +x 00:10:07.570 [2024-04-27 00:29:40.980436] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:10:07.570 [2024-04-27 00:29:40.980640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111661 ] 00:10:07.570 [2024-04-27 00:29:41.148967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.830 [2024-04-27 00:29:41.334625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.768 00:29:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:08.768 00:29:42 -- common/autotest_common.sh@850 -- # return 0 00:10:08.768 00:29:42 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:08.768 00:29:42 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 111661 00:10:08.768 00:29:42 -- common/autotest_common.sh@936 -- # '[' -z 111661 ']' 00:10:08.768 00:29:42 -- common/autotest_common.sh@940 -- # kill -0 111661 00:10:08.768 00:29:42 -- common/autotest_common.sh@941 -- # uname 00:10:08.768 00:29:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:08.768 00:29:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111661 00:10:08.768 00:29:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:08.768 00:29:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:08.768 killing process with pid 111661 00:10:08.768 00:29:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111661' 00:10:08.768 00:29:42 -- common/autotest_common.sh@955 -- # kill 111661 00:10:08.768 00:29:42 -- common/autotest_common.sh@960 -- # wait 111661 00:10:10.670 00:10:10.670 real 0m3.372s 00:10:10.670 user 0m3.482s 00:10:10.670 sys 0m0.504s 00:10:10.670 00:29:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:10.670 00:29:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.670 ************************************ 00:10:10.670 END TEST alias_rpc 00:10:10.670 ************************************ 00:10:10.670 00:29:44 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:10:10.670 00:29:44 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:10.670 00:29:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:10.670 00:29:44 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:10:10.670 00:29:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.929 ************************************ 00:10:10.929 START TEST spdkcli_tcp 00:10:10.929 ************************************ 00:10:10.929 00:29:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:10.929 * Looking for test storage... 00:10:10.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:10.929 00:29:44 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:10.929 00:29:44 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:10.929 00:29:44 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:10.929 00:29:44 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:10.929 00:29:44 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:10.929 00:29:44 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:10.929 00:29:44 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:10.929 00:29:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:10.929 00:29:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.929 00:29:44 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=111767 00:10:10.929 00:29:44 -- spdkcli/tcp.sh@27 -- # waitforlisten 111767 00:10:10.929 00:29:44 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:10.929 00:29:44 -- common/autotest_common.sh@817 -- # '[' -z 111767 ']' 00:10:10.929 00:29:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.929 00:29:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:10.929 00:29:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.929 00:29:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:10.929 00:29:44 -- common/autotest_common.sh@10 -- # set +x 00:10:10.929 [2024-04-27 00:29:44.440249] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:10:10.929 [2024-04-27 00:29:44.440432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111767 ] 00:10:11.192 [2024-04-27 00:29:44.610204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:11.465 [2024-04-27 00:29:44.804282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.465 [2024-04-27 00:29:44.804290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.032 00:29:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:12.032 00:29:45 -- common/autotest_common.sh@850 -- # return 0 00:10:12.032 00:29:45 -- spdkcli/tcp.sh@31 -- # socat_pid=111789 00:10:12.032 00:29:45 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:12.032 00:29:45 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:12.291 [ 00:10:12.291 "spdk_get_version", 00:10:12.291 "rpc_get_methods", 00:10:12.291 "keyring_get_keys", 00:10:12.291 "trace_get_info", 00:10:12.291 "trace_get_tpoint_group_mask", 00:10:12.291 "trace_disable_tpoint_group", 00:10:12.291 "trace_enable_tpoint_group", 00:10:12.291 "trace_clear_tpoint_mask", 00:10:12.291 "trace_set_tpoint_mask", 00:10:12.291 "framework_get_pci_devices", 00:10:12.291 "framework_get_config", 00:10:12.291 "framework_get_subsystems", 00:10:12.291 "iobuf_get_stats", 00:10:12.291 "iobuf_set_options", 00:10:12.291 "sock_get_default_impl", 00:10:12.291 "sock_set_default_impl", 00:10:12.291 "sock_impl_set_options", 00:10:12.291 "sock_impl_get_options", 00:10:12.291 "vmd_rescan", 00:10:12.291 "vmd_remove_device", 00:10:12.291 "vmd_enable", 00:10:12.291 "accel_get_stats", 00:10:12.291 "accel_set_options", 00:10:12.291 "accel_set_driver", 00:10:12.291 "accel_crypto_key_destroy", 00:10:12.291 "accel_crypto_keys_get", 00:10:12.291 "accel_crypto_key_create", 00:10:12.291 "accel_assign_opc", 00:10:12.291 "accel_get_module_info", 00:10:12.291 "accel_get_opc_assignments", 00:10:12.291 "notify_get_notifications", 00:10:12.291 "notify_get_types", 00:10:12.291 "bdev_get_histogram", 00:10:12.291 "bdev_enable_histogram", 00:10:12.291 "bdev_set_qos_limit", 00:10:12.291 "bdev_set_qd_sampling_period", 00:10:12.291 "bdev_get_bdevs", 00:10:12.291 "bdev_reset_iostat", 00:10:12.291 "bdev_get_iostat", 00:10:12.291 "bdev_examine", 00:10:12.291 "bdev_wait_for_examine", 00:10:12.291 "bdev_set_options", 00:10:12.291 "scsi_get_devices", 00:10:12.291 "thread_set_cpumask", 00:10:12.291 "framework_get_scheduler", 00:10:12.291 "framework_set_scheduler", 00:10:12.291 "framework_get_reactors", 00:10:12.291 "thread_get_io_channels", 00:10:12.291 "thread_get_pollers", 00:10:12.291 "thread_get_stats", 00:10:12.291 "framework_monitor_context_switch", 00:10:12.291 "spdk_kill_instance", 00:10:12.291 "log_enable_timestamps", 00:10:12.291 "log_get_flags", 00:10:12.291 "log_clear_flag", 00:10:12.291 "log_set_flag", 00:10:12.291 "log_get_level", 00:10:12.291 "log_set_level", 00:10:12.291 "log_get_print_level", 00:10:12.291 "log_set_print_level", 00:10:12.291 "framework_enable_cpumask_locks", 00:10:12.291 "framework_disable_cpumask_locks", 00:10:12.291 "framework_wait_init", 00:10:12.291 "framework_start_init", 00:10:12.291 "virtio_blk_create_transport", 00:10:12.291 "virtio_blk_get_transports", 00:10:12.291 "vhost_controller_set_coalescing", 00:10:12.291 
"vhost_get_controllers", 00:10:12.291 "vhost_delete_controller", 00:10:12.291 "vhost_create_blk_controller", 00:10:12.291 "vhost_scsi_controller_remove_target", 00:10:12.291 "vhost_scsi_controller_add_target", 00:10:12.291 "vhost_start_scsi_controller", 00:10:12.291 "vhost_create_scsi_controller", 00:10:12.291 "nbd_get_disks", 00:10:12.291 "nbd_stop_disk", 00:10:12.291 "nbd_start_disk", 00:10:12.291 "env_dpdk_get_mem_stats", 00:10:12.291 "nvmf_subsystem_get_listeners", 00:10:12.291 "nvmf_subsystem_get_qpairs", 00:10:12.291 "nvmf_subsystem_get_controllers", 00:10:12.291 "nvmf_get_stats", 00:10:12.291 "nvmf_get_transports", 00:10:12.291 "nvmf_create_transport", 00:10:12.291 "nvmf_get_targets", 00:10:12.291 "nvmf_delete_target", 00:10:12.291 "nvmf_create_target", 00:10:12.291 "nvmf_subsystem_allow_any_host", 00:10:12.291 "nvmf_subsystem_remove_host", 00:10:12.291 "nvmf_subsystem_add_host", 00:10:12.291 "nvmf_ns_remove_host", 00:10:12.291 "nvmf_ns_add_host", 00:10:12.291 "nvmf_subsystem_remove_ns", 00:10:12.291 "nvmf_subsystem_add_ns", 00:10:12.291 "nvmf_subsystem_listener_set_ana_state", 00:10:12.291 "nvmf_discovery_get_referrals", 00:10:12.291 "nvmf_discovery_remove_referral", 00:10:12.291 "nvmf_discovery_add_referral", 00:10:12.291 "nvmf_subsystem_remove_listener", 00:10:12.291 "nvmf_subsystem_add_listener", 00:10:12.291 "nvmf_delete_subsystem", 00:10:12.291 "nvmf_create_subsystem", 00:10:12.291 "nvmf_get_subsystems", 00:10:12.291 "nvmf_set_crdt", 00:10:12.291 "nvmf_set_config", 00:10:12.292 "nvmf_set_max_subsystems", 00:10:12.292 "iscsi_get_histogram", 00:10:12.292 "iscsi_enable_histogram", 00:10:12.292 "iscsi_set_options", 00:10:12.292 "iscsi_get_auth_groups", 00:10:12.292 "iscsi_auth_group_remove_secret", 00:10:12.292 "iscsi_auth_group_add_secret", 00:10:12.292 "iscsi_delete_auth_group", 00:10:12.292 "iscsi_create_auth_group", 00:10:12.292 "iscsi_set_discovery_auth", 00:10:12.292 "iscsi_get_options", 00:10:12.292 "iscsi_target_node_request_logout", 00:10:12.292 "iscsi_target_node_set_redirect", 00:10:12.292 "iscsi_target_node_set_auth", 00:10:12.292 "iscsi_target_node_add_lun", 00:10:12.292 "iscsi_get_stats", 00:10:12.292 "iscsi_get_connections", 00:10:12.292 "iscsi_portal_group_set_auth", 00:10:12.292 "iscsi_start_portal_group", 00:10:12.292 "iscsi_delete_portal_group", 00:10:12.292 "iscsi_create_portal_group", 00:10:12.292 "iscsi_get_portal_groups", 00:10:12.292 "iscsi_delete_target_node", 00:10:12.292 "iscsi_target_node_remove_pg_ig_maps", 00:10:12.292 "iscsi_target_node_add_pg_ig_maps", 00:10:12.292 "iscsi_create_target_node", 00:10:12.292 "iscsi_get_target_nodes", 00:10:12.292 "iscsi_delete_initiator_group", 00:10:12.292 "iscsi_initiator_group_remove_initiators", 00:10:12.292 "iscsi_initiator_group_add_initiators", 00:10:12.292 "iscsi_create_initiator_group", 00:10:12.292 "iscsi_get_initiator_groups", 00:10:12.292 "keyring_linux_set_options", 00:10:12.292 "keyring_file_remove_key", 00:10:12.292 "keyring_file_add_key", 00:10:12.292 "iaa_scan_accel_module", 00:10:12.292 "dsa_scan_accel_module", 00:10:12.292 "ioat_scan_accel_module", 00:10:12.292 "accel_error_inject_error", 00:10:12.292 "bdev_iscsi_delete", 00:10:12.292 "bdev_iscsi_create", 00:10:12.292 "bdev_iscsi_set_options", 00:10:12.292 "bdev_virtio_attach_controller", 00:10:12.292 "bdev_virtio_scsi_get_devices", 00:10:12.292 "bdev_virtio_detach_controller", 00:10:12.292 "bdev_virtio_blk_set_hotplug", 00:10:12.292 "bdev_ftl_set_property", 00:10:12.292 "bdev_ftl_get_properties", 00:10:12.292 "bdev_ftl_get_stats", 00:10:12.292 
"bdev_ftl_unmap", 00:10:12.292 "bdev_ftl_unload", 00:10:12.292 "bdev_ftl_delete", 00:10:12.292 "bdev_ftl_load", 00:10:12.292 "bdev_ftl_create", 00:10:12.292 "bdev_aio_delete", 00:10:12.292 "bdev_aio_rescan", 00:10:12.292 "bdev_aio_create", 00:10:12.292 "blobfs_create", 00:10:12.292 "blobfs_detect", 00:10:12.292 "blobfs_set_cache_size", 00:10:12.292 "bdev_zone_block_delete", 00:10:12.292 "bdev_zone_block_create", 00:10:12.292 "bdev_delay_delete", 00:10:12.292 "bdev_delay_create", 00:10:12.292 "bdev_delay_update_latency", 00:10:12.292 "bdev_split_delete", 00:10:12.292 "bdev_split_create", 00:10:12.292 "bdev_error_inject_error", 00:10:12.292 "bdev_error_delete", 00:10:12.292 "bdev_error_create", 00:10:12.292 "bdev_raid_set_options", 00:10:12.292 "bdev_raid_remove_base_bdev", 00:10:12.292 "bdev_raid_add_base_bdev", 00:10:12.292 "bdev_raid_delete", 00:10:12.292 "bdev_raid_create", 00:10:12.292 "bdev_raid_get_bdevs", 00:10:12.292 "bdev_lvol_grow_lvstore", 00:10:12.292 "bdev_lvol_get_lvols", 00:10:12.292 "bdev_lvol_get_lvstores", 00:10:12.292 "bdev_lvol_delete", 00:10:12.292 "bdev_lvol_set_read_only", 00:10:12.292 "bdev_lvol_resize", 00:10:12.292 "bdev_lvol_decouple_parent", 00:10:12.292 "bdev_lvol_inflate", 00:10:12.292 "bdev_lvol_rename", 00:10:12.292 "bdev_lvol_clone_bdev", 00:10:12.292 "bdev_lvol_clone", 00:10:12.292 "bdev_lvol_snapshot", 00:10:12.292 "bdev_lvol_create", 00:10:12.292 "bdev_lvol_delete_lvstore", 00:10:12.292 "bdev_lvol_rename_lvstore", 00:10:12.292 "bdev_lvol_create_lvstore", 00:10:12.292 "bdev_passthru_delete", 00:10:12.292 "bdev_passthru_create", 00:10:12.292 "bdev_nvme_cuse_unregister", 00:10:12.292 "bdev_nvme_cuse_register", 00:10:12.292 "bdev_opal_new_user", 00:10:12.292 "bdev_opal_set_lock_state", 00:10:12.292 "bdev_opal_delete", 00:10:12.292 "bdev_opal_get_info", 00:10:12.292 "bdev_opal_create", 00:10:12.292 "bdev_nvme_opal_revert", 00:10:12.292 "bdev_nvme_opal_init", 00:10:12.292 "bdev_nvme_send_cmd", 00:10:12.292 "bdev_nvme_get_path_iostat", 00:10:12.292 "bdev_nvme_get_mdns_discovery_info", 00:10:12.292 "bdev_nvme_stop_mdns_discovery", 00:10:12.292 "bdev_nvme_start_mdns_discovery", 00:10:12.292 "bdev_nvme_set_multipath_policy", 00:10:12.292 "bdev_nvme_set_preferred_path", 00:10:12.292 "bdev_nvme_get_io_paths", 00:10:12.292 "bdev_nvme_remove_error_injection", 00:10:12.292 "bdev_nvme_add_error_injection", 00:10:12.292 "bdev_nvme_get_discovery_info", 00:10:12.292 "bdev_nvme_stop_discovery", 00:10:12.292 "bdev_nvme_start_discovery", 00:10:12.292 "bdev_nvme_get_controller_health_info", 00:10:12.292 "bdev_nvme_disable_controller", 00:10:12.292 "bdev_nvme_enable_controller", 00:10:12.292 "bdev_nvme_reset_controller", 00:10:12.292 "bdev_nvme_get_transport_statistics", 00:10:12.292 "bdev_nvme_apply_firmware", 00:10:12.292 "bdev_nvme_detach_controller", 00:10:12.292 "bdev_nvme_get_controllers", 00:10:12.292 "bdev_nvme_attach_controller", 00:10:12.292 "bdev_nvme_set_hotplug", 00:10:12.292 "bdev_nvme_set_options", 00:10:12.292 "bdev_null_resize", 00:10:12.292 "bdev_null_delete", 00:10:12.292 "bdev_null_create", 00:10:12.292 "bdev_malloc_delete", 00:10:12.292 "bdev_malloc_create" 00:10:12.292 ] 00:10:12.292 00:29:45 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:12.292 00:29:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:12.292 00:29:45 -- common/autotest_common.sh@10 -- # set +x 00:10:12.292 00:29:45 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:12.292 00:29:45 -- spdkcli/tcp.sh@38 -- # killprocess 111767 00:10:12.292 00:29:45 -- 
common/autotest_common.sh@936 -- # '[' -z 111767 ']' 00:10:12.292 00:29:45 -- common/autotest_common.sh@940 -- # kill -0 111767 00:10:12.292 00:29:45 -- common/autotest_common.sh@941 -- # uname 00:10:12.292 00:29:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:12.292 00:29:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111767 00:10:12.292 00:29:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:12.292 killing process with pid 111767 00:10:12.292 00:29:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:12.292 00:29:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111767' 00:10:12.292 00:29:45 -- common/autotest_common.sh@955 -- # kill 111767 00:10:12.292 00:29:45 -- common/autotest_common.sh@960 -- # wait 111767 00:10:14.196 ************************************ 00:10:14.196 END TEST spdkcli_tcp 00:10:14.196 ************************************ 00:10:14.196 00:10:14.196 real 0m3.427s 00:10:14.196 user 0m6.085s 00:10:14.196 sys 0m0.551s 00:10:14.196 00:29:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:14.196 00:29:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.196 00:29:47 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:14.196 00:29:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:14.196 00:29:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.196 00:29:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 ************************************ 00:10:14.455 START TEST dpdk_mem_utility 00:10:14.455 ************************************ 00:10:14.455 00:29:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:14.455 * Looking for test storage... 00:10:14.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:14.455 00:29:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:14.455 00:29:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=111893 00:10:14.455 00:29:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:14.455 00:29:47 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 111893 00:10:14.455 00:29:47 -- common/autotest_common.sh@817 -- # '[' -z 111893 ']' 00:10:14.455 00:29:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.455 00:29:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:14.455 00:29:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.455 00:29:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:14.455 00:29:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.455 [2024-04-27 00:29:47.946479] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:10:14.455 [2024-04-27 00:29:47.946676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111893 ] 00:10:14.713 [2024-04-27 00:29:48.101497] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.713 [2024-04-27 00:29:48.273638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.653 00:29:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:15.653 00:29:48 -- common/autotest_common.sh@850 -- # return 0 00:10:15.653 00:29:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:15.653 00:29:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:15.653 00:29:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:15.653 00:29:48 -- common/autotest_common.sh@10 -- # set +x 00:10:15.653 { 00:10:15.653 "filename": "/tmp/spdk_mem_dump.txt" 00:10:15.653 } 00:10:15.653 00:29:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:15.653 00:29:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:15.653 DPDK memory size 820.000000 MiB in 1 heap(s) 00:10:15.653 1 heaps totaling size 820.000000 MiB 00:10:15.653 size: 820.000000 MiB heap id: 0 00:10:15.653 end heaps---------- 00:10:15.653 8 mempools totaling size 598.116089 MiB 00:10:15.653 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:15.653 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:15.653 size: 84.521057 MiB name: bdev_io_111893 00:10:15.653 size: 51.011292 MiB name: evtpool_111893 00:10:15.653 size: 50.003479 MiB name: msgpool_111893 00:10:15.653 size: 21.763794 MiB name: PDU_Pool 00:10:15.653 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:15.653 size: 0.026123 MiB name: Session_Pool 00:10:15.653 end mempools------- 00:10:15.653 6 memzones totaling size 4.142822 MiB 00:10:15.653 size: 1.000366 MiB name: RG_ring_0_111893 00:10:15.653 size: 1.000366 MiB name: RG_ring_1_111893 00:10:15.653 size: 1.000366 MiB name: RG_ring_4_111893 00:10:15.653 size: 1.000366 MiB name: RG_ring_5_111893 00:10:15.653 size: 0.125366 MiB name: RG_ring_2_111893 00:10:15.653 size: 0.015991 MiB name: RG_ring_3_111893 00:10:15.653 end memzones------- 00:10:15.653 00:29:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:15.653 heap id: 0 total size: 820.000000 MiB number of busy elements: 223 number of free elements: 18 00:10:15.653 list of free elements. 
size: 18.470459 MiB 00:10:15.653 element at address: 0x200000400000 with size: 1.999451 MiB 00:10:15.653 element at address: 0x200000800000 with size: 1.996887 MiB 00:10:15.653 element at address: 0x200007000000 with size: 1.995972 MiB 00:10:15.653 element at address: 0x20000b200000 with size: 1.995972 MiB 00:10:15.653 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:15.653 element at address: 0x200019500040 with size: 0.999939 MiB 00:10:15.653 element at address: 0x200019600000 with size: 0.999329 MiB 00:10:15.653 element at address: 0x200003e00000 with size: 0.996094 MiB 00:10:15.653 element at address: 0x200032200000 with size: 0.994324 MiB 00:10:15.653 element at address: 0x200018e00000 with size: 0.959656 MiB 00:10:15.653 element at address: 0x200019900040 with size: 0.937256 MiB 00:10:15.653 element at address: 0x200000200000 with size: 0.834106 MiB 00:10:15.653 element at address: 0x20001b000000 with size: 0.561951 MiB 00:10:15.653 element at address: 0x200019200000 with size: 0.489197 MiB 00:10:15.653 element at address: 0x200019a00000 with size: 0.485413 MiB 00:10:15.653 element at address: 0x200013800000 with size: 0.469116 MiB 00:10:15.653 element at address: 0x200028400000 with size: 0.399719 MiB 00:10:15.653 element at address: 0x200003a00000 with size: 0.356140 MiB 00:10:15.653 list of standard malloc elements. size: 199.265137 MiB 00:10:15.653 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:10:15.653 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:10:15.653 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:15.653 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:15.653 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:10:15.653 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:15.653 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:10:15.653 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:15.653 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:10:15.654 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:10:15.654 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:10:15.654 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6a80 with size: 0.000244 MiB 
00:10:15.654 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200003aff980 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200003affa80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200003eff000 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200013878180 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200013878280 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200013878380 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200013878480 with size: 0.000244 MiB 00:10:15.654 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:10:15.654 element at address: 0x200019abc680 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0922c0 
with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:10:15.654 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20001b0953c0 with size: 0.000244 MiB 
00:10:15.655 element at address: 0x200028466540 with size: 0.000244 MiB 00:10:15.655 element at address: 0x200028466640 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846d300 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846d580 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846d680 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846d780 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846d880 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846d980 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846da80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846db80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846de80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846df80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e080 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e180 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e280 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e380 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e480 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e580 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e680 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e780 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e880 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846e980 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f080 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f180 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f280 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f380 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f480 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f580 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f680 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f780 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f880 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846f980 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:10:15.655 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:10:15.655 list of memzone associated elements. 
size: 602.264404 MiB 00:10:15.655 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:10:15.655 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:15.655 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:10:15.655 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:15.655 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:10:15.655 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_111893_0 00:10:15.655 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:10:15.655 associated memzone info: size: 48.002930 MiB name: MP_evtpool_111893_0 00:10:15.655 element at address: 0x200003fff340 with size: 48.003113 MiB 00:10:15.655 associated memzone info: size: 48.002930 MiB name: MP_msgpool_111893_0 00:10:15.655 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:10:15.655 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:15.655 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:10:15.655 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:15.655 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:10:15.655 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_111893 00:10:15.655 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:10:15.655 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_111893 00:10:15.655 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:15.655 associated memzone info: size: 1.007996 MiB name: MP_evtpool_111893 00:10:15.655 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:10:15.655 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:15.656 element at address: 0x200019abc780 with size: 1.008179 MiB 00:10:15.656 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:15.656 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:15.656 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:15.656 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:10:15.656 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:15.656 element at address: 0x200003eff100 with size: 1.000549 MiB 00:10:15.656 associated memzone info: size: 1.000366 MiB name: RG_ring_0_111893 00:10:15.656 element at address: 0x200003affb80 with size: 1.000549 MiB 00:10:15.656 associated memzone info: size: 1.000366 MiB name: RG_ring_1_111893 00:10:15.656 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:10:15.656 associated memzone info: size: 1.000366 MiB name: RG_ring_4_111893 00:10:15.656 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:10:15.656 associated memzone info: size: 1.000366 MiB name: RG_ring_5_111893 00:10:15.656 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:10:15.656 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_111893 00:10:15.656 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:10:15.656 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:15.656 element at address: 0x200013878680 with size: 0.500549 MiB 00:10:15.656 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:15.656 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:10:15.656 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:15.656 element at address: 0x200003adf740 with size: 0.125549 MiB 00:10:15.656 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_111893 00:10:15.656 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:10:15.656 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:15.656 element at address: 0x200028466740 with size: 0.023804 MiB 00:10:15.656 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:15.656 element at address: 0x200003adb500 with size: 0.016174 MiB 00:10:15.656 associated memzone info: size: 0.015991 MiB name: RG_ring_3_111893 00:10:15.656 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:10:15.656 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:15.656 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:10:15.656 associated memzone info: size: 0.000183 MiB name: MP_msgpool_111893 00:10:15.656 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:10:15.656 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_111893 00:10:15.656 element at address: 0x20002846d400 with size: 0.000366 MiB 00:10:15.656 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:15.656 00:29:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:15.656 00:29:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 111893 00:10:15.656 00:29:49 -- common/autotest_common.sh@936 -- # '[' -z 111893 ']' 00:10:15.656 00:29:49 -- common/autotest_common.sh@940 -- # kill -0 111893 00:10:15.656 00:29:49 -- common/autotest_common.sh@941 -- # uname 00:10:15.656 00:29:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:15.656 00:29:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111893 00:10:15.656 00:29:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:15.656 00:29:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:15.656 killing process with pid 111893 00:10:15.656 00:29:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111893' 00:10:15.656 00:29:49 -- common/autotest_common.sh@955 -- # kill 111893 00:10:15.656 00:29:49 -- common/autotest_common.sh@960 -- # wait 111893 00:10:17.560 ************************************ 00:10:17.560 END TEST dpdk_mem_utility 00:10:17.560 ************************************ 00:10:17.560 00:10:17.560 real 0m3.183s 00:10:17.560 user 0m3.285s 00:10:17.560 sys 0m0.476s 00:10:17.560 00:29:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.560 00:29:50 -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 00:29:51 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:17.560 00:29:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:17.560 00:29:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.560 00:29:51 -- common/autotest_common.sh@10 -- # set +x 00:10:17.560 ************************************ 00:10:17.560 START TEST event 00:10:17.560 ************************************ 00:10:17.560 00:29:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:17.560 * Looking for test storage... 
00:10:17.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:17.819 00:29:51 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:17.819 00:29:51 -- bdev/nbd_common.sh@6 -- # set -e 00:10:17.819 00:29:51 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:17.819 00:29:51 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:17.819 00:29:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.819 00:29:51 -- common/autotest_common.sh@10 -- # set +x 00:10:17.819 ************************************ 00:10:17.819 START TEST event_perf 00:10:17.819 ************************************ 00:10:17.819 00:29:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:17.819 Running I/O for 1 seconds...[2024-04-27 00:29:51.239945] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:10:17.819 [2024-04-27 00:29:51.240128] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112004 ] 00:10:18.077 [2024-04-27 00:29:51.425783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.077 [2024-04-27 00:29:51.617286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.077 [2024-04-27 00:29:51.617376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.077 [2024-04-27 00:29:51.617521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.077 [2024-04-27 00:29:51.617530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.457 Running I/O for 1 seconds... 00:10:19.457 lcore 0: 132149 00:10:19.457 lcore 1: 132152 00:10:19.457 lcore 2: 132148 00:10:19.457 lcore 3: 132147 00:10:19.457 done. 00:10:19.457 00:10:19.457 real 0m1.766s 00:10:19.457 user 0m4.527s 00:10:19.457 sys 0m0.138s 00:10:19.457 00:29:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:19.457 00:29:52 -- common/autotest_common.sh@10 -- # set +x 00:10:19.457 ************************************ 00:10:19.457 END TEST event_perf 00:10:19.457 ************************************ 00:10:19.457 00:29:53 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:19.457 00:29:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:19.457 00:29:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.457 00:29:53 -- common/autotest_common.sh@10 -- # set +x 00:10:19.716 ************************************ 00:10:19.716 START TEST event_reactor 00:10:19.716 ************************************ 00:10:19.716 00:29:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:19.716 [2024-04-27 00:29:53.093167] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:10:19.716 [2024-04-27 00:29:53.093360] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112061 ] 00:10:19.716 [2024-04-27 00:29:53.260021] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.974 [2024-04-27 00:29:53.424177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.351 test_start 00:10:21.351 oneshot 00:10:21.351 tick 100 00:10:21.351 tick 100 00:10:21.351 tick 250 00:10:21.351 tick 100 00:10:21.351 tick 100 00:10:21.351 tick 100 00:10:21.351 tick 250 00:10:21.351 tick 500 00:10:21.351 tick 100 00:10:21.351 tick 100 00:10:21.351 tick 250 00:10:21.351 tick 100 00:10:21.351 tick 100 00:10:21.351 test_end 00:10:21.351 00:10:21.351 real 0m1.728s 00:10:21.351 user 0m1.504s 00:10:21.351 sys 0m0.124s 00:10:21.351 00:29:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:21.351 00:29:54 -- common/autotest_common.sh@10 -- # set +x 00:10:21.351 ************************************ 00:10:21.351 END TEST event_reactor 00:10:21.351 ************************************ 00:10:21.351 00:29:54 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:21.351 00:29:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:21.351 00:29:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.351 00:29:54 -- common/autotest_common.sh@10 -- # set +x 00:10:21.352 ************************************ 00:10:21.352 START TEST event_reactor_perf 00:10:21.352 ************************************ 00:10:21.352 00:29:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:21.352 [2024-04-27 00:29:54.898011] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:10:21.352 [2024-04-27 00:29:54.898226] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112117 ] 00:10:21.610 [2024-04-27 00:29:55.067596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.869 [2024-04-27 00:29:55.258629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.244 test_start 00:10:23.244 test_end 00:10:23.244 Performance: 389758 events per second 00:10:23.244 00:10:23.244 real 0m1.735s 00:10:23.244 user 0m1.527s 00:10:23.244 sys 0m0.108s 00:10:23.244 00:29:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:23.244 00:29:56 -- common/autotest_common.sh@10 -- # set +x 00:10:23.244 ************************************ 00:10:23.244 END TEST event_reactor_perf 00:10:23.244 ************************************ 00:10:23.244 00:29:56 -- event/event.sh@49 -- # uname -s 00:10:23.244 00:29:56 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:23.244 00:29:56 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:23.244 00:29:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:23.244 00:29:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.244 00:29:56 -- common/autotest_common.sh@10 -- # set +x 00:10:23.244 ************************************ 00:10:23.244 START TEST event_scheduler 00:10:23.244 ************************************ 00:10:23.244 00:29:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:23.244 * Looking for test storage... 00:10:23.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:23.244 00:29:56 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:23.244 00:29:56 -- scheduler/scheduler.sh@35 -- # scheduler_pid=112195 00:10:23.244 00:29:56 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:23.244 00:29:56 -- scheduler/scheduler.sh@37 -- # waitforlisten 112195 00:10:23.244 00:29:56 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:23.244 00:29:56 -- common/autotest_common.sh@817 -- # '[' -z 112195 ']' 00:10:23.244 00:29:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.244 00:29:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:23.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.244 00:29:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.244 00:29:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:23.244 00:29:56 -- common/autotest_common.sh@10 -- # set +x 00:10:23.502 [2024-04-27 00:29:56.862007] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:10:23.502 [2024-04-27 00:29:56.862256] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112195 ] 00:10:23.502 [2024-04-27 00:29:57.066211] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.760 [2024-04-27 00:29:57.319224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.760 [2024-04-27 00:29:57.319481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.760 [2024-04-27 00:29:57.320052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.760 [2024-04-27 00:29:57.320097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.326 00:29:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:24.326 00:29:57 -- common/autotest_common.sh@850 -- # return 0 00:10:24.326 00:29:57 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:24.326 00:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.326 00:29:57 -- common/autotest_common.sh@10 -- # set +x 00:10:24.326 POWER: Env isn't set yet! 00:10:24.326 POWER: Attempting to initialise ACPI cpufreq power management... 00:10:24.326 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:24.326 POWER: Cannot set governor of lcore 0 to userspace 00:10:24.326 POWER: Attempting to initialise PSTAT power management... 00:10:24.326 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:24.326 POWER: Cannot set governor of lcore 0 to performance 00:10:24.326 POWER: Attempting to initialise AMD PSTATE power management... 00:10:24.326 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:24.326 POWER: Cannot set governor of lcore 0 to userspace 00:10:24.327 POWER: Attempting to initialise CPPC power management... 00:10:24.327 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:24.327 POWER: Cannot set governor of lcore 0 to userspace 00:10:24.327 POWER: Attempting to initialise VM power management... 00:10:24.327 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:24.327 POWER: Unable to set Power Management Environment for lcore 0 00:10:24.327 [2024-04-27 00:29:57.838347] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:10:24.327 [2024-04-27 00:29:57.838418] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:10:24.327 [2024-04-27 00:29:57.838461] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:10:24.327 00:29:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.327 00:29:57 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:24.327 00:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.327 00:29:57 -- common/autotest_common.sh@10 -- # set +x 00:10:24.585 [2024-04-27 00:29:58.104878] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
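The scheduler_create_thread test that follows creates a busy (active_pinned, -a 100) and an idle (idle_pinned, -a 0) thread pinned to each of the four cores, then a few unpinned partially-active threads (one_third_active, half_active). A minimal sketch of the per-core loop, assuming the rpc_cmd alias and scheduler_plugin wired up by the scheduler.sh harness above; the mask values mirror the -m flags visible in the trace:

    # one busy + one idle thread per core, masks 0x1..0x8
    for core in 0 1 2 3; do
      mask=$(printf '0x%x' $((1 << core)))
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m "$mask" -a 100
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
        -n idle_pinned -m "$mask" -a 0
    done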
00:10:24.585 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.585 00:29:58 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:24.585 00:29:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:24.585 00:29:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.585 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.585 ************************************ 00:10:24.585 START TEST scheduler_create_thread 00:10:24.585 ************************************ 00:10:24.585 00:29:58 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:10:24.585 00:29:58 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:24.585 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.585 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.585 2 00:10:24.585 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.585 00:29:58 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:24.585 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.585 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.843 3 00:10:24.843 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.843 00:29:58 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:24.843 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.843 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.843 4 00:10:24.843 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.843 00:29:58 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:24.843 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.843 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.843 5 00:10:24.843 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.843 00:29:58 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:24.843 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.843 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.843 6 00:10:24.843 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.844 00:29:58 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:24.844 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.844 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.844 7 00:10:24.844 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.844 00:29:58 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:24.844 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.844 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.844 8 00:10:24.844 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.844 00:29:58 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:24.844 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.844 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.844 9 00:10:24.844 
00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.844 00:29:58 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:24.844 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.844 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.844 10 00:10:24.844 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.844 00:29:58 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:24.844 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.844 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.844 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.844 00:29:58 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:24.844 00:29:58 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:24.844 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.844 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:24.844 00:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:24.844 00:29:58 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:24.844 00:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:24.844 00:29:58 -- common/autotest_common.sh@10 -- # set +x 00:10:25.803 00:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.803 00:29:59 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:25.803 00:29:59 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:25.803 00:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.803 00:29:59 -- common/autotest_common.sh@10 -- # set +x 00:10:26.740 00:30:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.740 00:10:26.740 real 0m2.149s 00:10:26.740 user 0m0.006s 00:10:26.740 sys 0m0.010s 00:10:26.740 ************************************ 00:10:26.740 END TEST scheduler_create_thread 00:10:26.740 ************************************ 00:10:26.740 00:30:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:26.740 00:30:00 -- common/autotest_common.sh@10 -- # set +x 00:10:26.999 00:30:00 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:26.999 00:30:00 -- scheduler/scheduler.sh@46 -- # killprocess 112195 00:10:26.999 00:30:00 -- common/autotest_common.sh@936 -- # '[' -z 112195 ']' 00:10:26.999 00:30:00 -- common/autotest_common.sh@940 -- # kill -0 112195 00:10:26.999 00:30:00 -- common/autotest_common.sh@941 -- # uname 00:10:26.999 00:30:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:26.999 00:30:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112195 00:10:26.999 00:30:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:26.999 killing process with pid 112195 00:10:26.999 00:30:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:26.999 00:30:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112195' 00:10:26.999 00:30:00 -- common/autotest_common.sh@955 -- # kill 112195 00:10:26.999 00:30:00 -- common/autotest_common.sh@960 -- # wait 112195 00:10:27.258 [2024-04-27 00:30:00.781700] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
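The teardown just logged follows the killprocess pattern from common/autotest_common.sh: confirm a pid was given, probe it with kill -0, check the process name so a sudo wrapper is never signalled, then kill and reap. A hedged sketch reconstructed from the trace records above (simplified; the real helper lives in common/autotest_common.sh):

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 0              # nothing left to kill
      if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1        # never signal the sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
    }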
00:10:28.634 00:10:28.634 real 0m5.139s 00:10:28.634 user 0m8.462s 00:10:28.634 sys 0m0.488s 00:10:28.634 00:30:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:28.634 00:30:01 -- common/autotest_common.sh@10 -- # set +x 00:10:28.634 ************************************ 00:10:28.634 END TEST event_scheduler 00:10:28.634 ************************************ 00:10:28.634 00:30:01 -- event/event.sh@51 -- # modprobe -n nbd 00:10:28.634 00:30:01 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:28.634 00:30:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:28.634 00:30:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:28.634 00:30:01 -- common/autotest_common.sh@10 -- # set +x 00:10:28.634 ************************************ 00:10:28.634 START TEST app_repeat 00:10:28.634 ************************************ 00:10:28.634 00:30:01 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:10:28.634 00:30:01 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.634 00:30:01 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.634 00:30:01 -- event/event.sh@13 -- # local nbd_list 00:10:28.634 00:30:01 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:28.634 00:30:01 -- event/event.sh@14 -- # local bdev_list 00:10:28.634 00:30:01 -- event/event.sh@15 -- # local repeat_times=4 00:10:28.634 00:30:01 -- event/event.sh@17 -- # modprobe nbd 00:10:28.634 00:30:01 -- event/event.sh@19 -- # repeat_pid=112326 00:10:28.634 00:30:01 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:28.634 Process app_repeat pid: 112326 00:10:28.634 00:30:01 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112326' 00:10:28.634 00:30:01 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:28.634 00:30:01 -- event/event.sh@23 -- # for i in {0..2} 00:10:28.634 spdk_app_start Round 0 00:10:28.634 00:30:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:28.634 00:30:01 -- event/event.sh@25 -- # waitforlisten 112326 /var/tmp/spdk-nbd.sock 00:10:28.634 00:30:01 -- common/autotest_common.sh@817 -- # '[' -z 112326 ']' 00:10:28.634 00:30:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:28.634 00:30:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:28.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:28.634 00:30:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:28.634 00:30:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:28.634 00:30:01 -- common/autotest_common.sh@10 -- # set +x 00:10:28.634 [2024-04-27 00:30:01.977364] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:10:28.634 [2024-04-27 00:30:01.977572] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112326 ] 00:10:28.634 [2024-04-27 00:30:02.138527] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:28.893 [2024-04-27 00:30:02.348222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.893 [2024-04-27 00:30:02.348217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.459 00:30:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:29.459 00:30:03 -- common/autotest_common.sh@850 -- # return 0 00:10:29.459 00:30:03 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:30.026 Malloc0 00:10:30.026 00:30:03 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:30.285 Malloc1 00:10:30.285 00:30:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@12 -- # local i 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:30.285 00:30:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:30.543 /dev/nbd0 00:10:30.543 00:30:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:30.543 00:30:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:30.543 00:30:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:10:30.543 00:30:04 -- common/autotest_common.sh@855 -- # local i 00:10:30.543 00:30:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:30.543 00:30:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:30.543 00:30:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:10:30.543 00:30:04 -- common/autotest_common.sh@859 -- # break 00:10:30.543 00:30:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:30.543 00:30:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:30.543 00:30:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:30.543 1+0 records in 00:10:30.543 1+0 records out 00:10:30.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452893 s, 9.0 MB/s 00:10:30.543 00:30:04 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:30.543 00:30:04 -- common/autotest_common.sh@872 -- # size=4096 00:10:30.543 00:30:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:30.543 00:30:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:30.543 00:30:04 -- common/autotest_common.sh@875 -- # return 0 00:10:30.543 00:30:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:30.544 00:30:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:30.544 00:30:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:30.802 /dev/nbd1 00:10:30.802 00:30:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:30.802 00:30:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:30.802 00:30:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:10:30.802 00:30:04 -- common/autotest_common.sh@855 -- # local i 00:10:30.802 00:30:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:30.802 00:30:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:30.802 00:30:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:10:30.802 00:30:04 -- common/autotest_common.sh@859 -- # break 00:10:30.802 00:30:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:30.802 00:30:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:30.802 00:30:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:30.802 1+0 records in 00:10:30.802 1+0 records out 00:10:30.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350231 s, 11.7 MB/s 00:10:30.802 00:30:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:30.802 00:30:04 -- common/autotest_common.sh@872 -- # size=4096 00:10:30.802 00:30:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:30.802 00:30:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:30.802 00:30:04 -- common/autotest_common.sh@875 -- # return 0 00:10:30.802 00:30:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:30.802 00:30:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:30.802 00:30:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:30.802 00:30:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.803 00:30:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:31.061 { 00:10:31.061 "nbd_device": "/dev/nbd0", 00:10:31.061 "bdev_name": "Malloc0" 00:10:31.061 }, 00:10:31.061 { 00:10:31.061 "nbd_device": "/dev/nbd1", 00:10:31.061 "bdev_name": "Malloc1" 00:10:31.061 } 00:10:31.061 ]' 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:31.061 { 00:10:31.061 "nbd_device": "/dev/nbd0", 00:10:31.061 "bdev_name": "Malloc0" 00:10:31.061 }, 00:10:31.061 { 00:10:31.061 "nbd_device": "/dev/nbd1", 00:10:31.061 "bdev_name": "Malloc1" 00:10:31.061 } 00:10:31.061 ]' 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:31.061 /dev/nbd1' 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:31.061 /dev/nbd1' 00:10:31.061 00:30:04 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@65 -- # count=2 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@95 -- # count=2 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:31.061 256+0 records in 00:10:31.061 256+0 records out 00:10:31.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00852265 s, 123 MB/s 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:31.061 00:30:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:31.319 256+0 records in 00:10:31.319 256+0 records out 00:10:31.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242818 s, 43.2 MB/s 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:31.319 256+0 records in 00:10:31.319 256+0 records out 00:10:31.319 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281799 s, 37.2 MB/s 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@51 -- # local i 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.319 00:30:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@41 -- # break 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.578 00:30:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@41 -- # break 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.836 00:30:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@65 -- # true 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@65 -- # count=0 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@104 -- # count=0 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:32.094 00:30:05 -- bdev/nbd_common.sh@109 -- # return 0 00:10:32.094 00:30:05 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:32.671 00:30:05 -- event/event.sh@35 -- # sleep 3 00:10:33.609 [2024-04-27 00:30:06.995787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:33.609 [2024-04-27 00:30:07.182915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.609 [2024-04-27 00:30:07.182926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.868 [2024-04-27 00:30:07.348419] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:33.868 [2024-04-27 00:30:07.348558] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
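Round 0's data path above, repeated in every round, is nbd_common.sh's write-then-verify cycle: fill a 1 MiB scratch file from /dev/urandom, dd it onto each exported /dev/nbdX with O_DIRECT, then cmp each device back against the file. A minimal sketch of that cycle (scratch path shortened from the nbdrandtest file in the log):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$nbd"                              # verify phase
    done
    rm "$tmp"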
00:10:35.771 00:30:08 -- event/event.sh@23 -- # for i in {0..2} 00:10:35.771 spdk_app_start Round 1 00:10:35.771 00:30:08 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:35.771 00:30:08 -- event/event.sh@25 -- # waitforlisten 112326 /var/tmp/spdk-nbd.sock 00:10:35.771 00:30:08 -- common/autotest_common.sh@817 -- # '[' -z 112326 ']' 00:10:35.771 00:30:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:35.771 00:30:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:35.771 00:30:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:35.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:35.771 00:30:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:35.771 00:30:08 -- common/autotest_common.sh@10 -- # set +x 00:10:35.771 00:30:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:35.771 00:30:09 -- common/autotest_common.sh@850 -- # return 0 00:10:35.771 00:30:09 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:36.030 Malloc0 00:10:36.030 00:30:09 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:36.289 Malloc1 00:10:36.289 00:30:09 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@12 -- # local i 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:36.289 00:30:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:36.547 /dev/nbd0 00:10:36.547 00:30:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:36.547 00:30:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:36.547 00:30:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:10:36.547 00:30:10 -- common/autotest_common.sh@855 -- # local i 00:10:36.547 00:30:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:36.547 00:30:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:36.547 00:30:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:10:36.547 00:30:10 -- common/autotest_common.sh@859 -- # break 00:10:36.547 00:30:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:36.547 00:30:10 -- common/autotest_common.sh@870 -- # (( 
i <= 20 )) 00:10:36.547 00:30:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:36.547 1+0 records in 00:10:36.547 1+0 records out 00:10:36.547 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475831 s, 8.6 MB/s 00:10:36.547 00:30:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:36.806 00:30:10 -- common/autotest_common.sh@872 -- # size=4096 00:10:36.806 00:30:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:36.806 00:30:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:36.806 00:30:10 -- common/autotest_common.sh@875 -- # return 0 00:10:36.806 00:30:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:36.806 00:30:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:36.806 00:30:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:37.065 /dev/nbd1 00:10:37.065 00:30:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:37.065 00:30:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:37.065 00:30:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:10:37.065 00:30:10 -- common/autotest_common.sh@855 -- # local i 00:10:37.065 00:30:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:37.065 00:30:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:37.065 00:30:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:10:37.065 00:30:10 -- common/autotest_common.sh@859 -- # break 00:10:37.065 00:30:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:37.065 00:30:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:37.065 00:30:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:37.065 1+0 records in 00:10:37.065 1+0 records out 00:10:37.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776264 s, 5.3 MB/s 00:10:37.065 00:30:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:37.065 00:30:10 -- common/autotest_common.sh@872 -- # size=4096 00:10:37.065 00:30:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:37.065 00:30:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:37.065 00:30:10 -- common/autotest_common.sh@875 -- # return 0 00:10:37.065 00:30:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.065 00:30:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:37.065 00:30:10 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:37.065 00:30:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.065 00:30:10 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:37.324 { 00:10:37.324 "nbd_device": "/dev/nbd0", 00:10:37.324 "bdev_name": "Malloc0" 00:10:37.324 }, 00:10:37.324 { 00:10:37.324 "nbd_device": "/dev/nbd1", 00:10:37.324 "bdev_name": "Malloc1" 00:10:37.324 } 00:10:37.324 ]' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:37.324 { 00:10:37.324 "nbd_device": "/dev/nbd0", 00:10:37.324 "bdev_name": "Malloc0" 00:10:37.324 }, 00:10:37.324 { 00:10:37.324 "nbd_device": "/dev/nbd1", 00:10:37.324 "bdev_name": "Malloc1" 00:10:37.324 } 
00:10:37.324 ]' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:37.324 /dev/nbd1' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:37.324 /dev/nbd1' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@65 -- # count=2 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@95 -- # count=2 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:37.324 256+0 records in 00:10:37.324 256+0 records out 00:10:37.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00879683 s, 119 MB/s 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:37.324 256+0 records in 00:10:37.324 256+0 records out 00:10:37.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020738 s, 50.6 MB/s 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:37.324 256+0 records in 00:10:37.324 256+0 records out 00:10:37.324 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280077 s, 37.4 MB/s 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:10:37.324 00:30:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@51 -- # local i 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.324 00:30:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@41 -- # break 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:37.583 00:30:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@41 -- # break 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@45 -- # return 0 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:37.841 00:30:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:38.100 00:30:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:38.100 00:30:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:38.100 00:30:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@65 -- # true 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@65 -- # count=0 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@104 -- # count=0 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:38.358 00:30:11 -- bdev/nbd_common.sh@109 -- # return 0 00:10:38.358 00:30:11 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:38.617 00:30:12 -- event/event.sh@35 -- # sleep 3 00:10:39.994 [2024-04-27 00:30:13.182072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:39.994 [2024-04-27 00:30:13.341262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.994 [2024-04-27 00:30:13.341276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.994 [2024-04-27 00:30:13.502082] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
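
Both count checks traced above (2 while the disks are mapped, 0 after nbd_stop_disk) come from the same pipeline: nbd_get_disks returns a JSON array, jq extracts each .nbd_device field, and grep -c counts the /dev/nbd entries. Recreated as a standalone snippet, with the socket and script paths as in the trace; the trailing || true mirrors the trace's "true" line, since grep -c exits non-zero when the count is 0:

  # Count the NBD devices currently exported by the target.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  disks_json=$("$rpc" -s "$sock" nbd_get_disks)
  count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  echo "exported NBD devices: $count"       # 2 while mapped, 0 after stop
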
00:10:39.994 [2024-04-27 00:30:13.502509] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:41.892 spdk_app_start Round 2 00:10:41.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:41.892 00:30:15 -- event/event.sh@23 -- # for i in {0..2} 00:10:41.892 00:30:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:41.892 00:30:15 -- event/event.sh@25 -- # waitforlisten 112326 /var/tmp/spdk-nbd.sock 00:10:41.892 00:30:15 -- common/autotest_common.sh@817 -- # '[' -z 112326 ']' 00:10:41.892 00:30:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:41.892 00:30:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:41.892 00:30:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:41.892 00:30:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:41.892 00:30:15 -- common/autotest_common.sh@10 -- # set +x 00:10:41.892 00:30:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:41.892 00:30:15 -- common/autotest_common.sh@850 -- # return 0 00:10:41.892 00:30:15 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:42.458 Malloc0 00:10:42.459 00:30:15 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:42.459 Malloc1 00:10:42.459 00:30:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@12 -- # local i 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:42.459 00:30:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:42.717 /dev/nbd0 00:10:42.717 00:30:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:42.717 00:30:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:42.717 00:30:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:10:42.717 00:30:16 -- common/autotest_common.sh@855 -- # local i 00:10:42.717 00:30:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:42.717 00:30:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:42.717 00:30:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:10:42.717 00:30:16 -- 
common/autotest_common.sh@859 -- # break 00:10:42.717 00:30:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:42.717 00:30:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:42.717 00:30:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:42.717 1+0 records in 00:10:42.717 1+0 records out 00:10:42.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597088 s, 6.9 MB/s 00:10:42.717 00:30:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:42.717 00:30:16 -- common/autotest_common.sh@872 -- # size=4096 00:10:42.717 00:30:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:42.976 00:30:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:42.976 00:30:16 -- common/autotest_common.sh@875 -- # return 0 00:10:42.976 00:30:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:42.976 00:30:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:42.976 00:30:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:43.235 /dev/nbd1 00:10:43.235 00:30:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:43.235 00:30:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:43.235 00:30:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:10:43.235 00:30:16 -- common/autotest_common.sh@855 -- # local i 00:10:43.235 00:30:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:43.235 00:30:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:43.235 00:30:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:10:43.235 00:30:16 -- common/autotest_common.sh@859 -- # break 00:10:43.235 00:30:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:43.235 00:30:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:43.235 00:30:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:43.235 1+0 records in 00:10:43.235 1+0 records out 00:10:43.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579644 s, 7.1 MB/s 00:10:43.235 00:30:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:43.235 00:30:16 -- common/autotest_common.sh@872 -- # size=4096 00:10:43.235 00:30:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:43.235 00:30:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:43.235 00:30:16 -- common/autotest_common.sh@875 -- # return 0 00:10:43.235 00:30:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:43.235 00:30:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:43.235 00:30:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:43.235 00:30:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.235 00:30:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:43.495 { 00:10:43.495 "nbd_device": "/dev/nbd0", 00:10:43.495 "bdev_name": "Malloc0" 00:10:43.495 }, 00:10:43.495 { 00:10:43.495 "nbd_device": "/dev/nbd1", 00:10:43.495 "bdev_name": "Malloc1" 00:10:43.495 } 00:10:43.495 ]' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:43.495 { 00:10:43.495 "nbd_device": 
"/dev/nbd0", 00:10:43.495 "bdev_name": "Malloc0" 00:10:43.495 }, 00:10:43.495 { 00:10:43.495 "nbd_device": "/dev/nbd1", 00:10:43.495 "bdev_name": "Malloc1" 00:10:43.495 } 00:10:43.495 ]' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:43.495 /dev/nbd1' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:43.495 /dev/nbd1' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@65 -- # count=2 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@95 -- # count=2 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:43.495 256+0 records in 00:10:43.495 256+0 records out 00:10:43.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00893376 s, 117 MB/s 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:43.495 256+0 records in 00:10:43.495 256+0 records out 00:10:43.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028317 s, 37.0 MB/s 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:43.495 256+0 records in 00:10:43.495 256+0 records out 00:10:43.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269321 s, 38.9 MB/s 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 
00:10:43.495 00:30:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@51 -- # local i 00:10:43.495 00:30:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.496 00:30:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@41 -- # break 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@45 -- # return 0 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:43.754 00:30:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@41 -- # break 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@45 -- # return 0 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:44.013 00:30:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@65 -- # true 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@65 -- # count=0 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@104 -- # count=0 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:44.271 00:30:17 -- bdev/nbd_common.sh@109 -- # return 0 00:10:44.271 00:30:17 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:44.892 00:30:18 -- event/event.sh@35 -- # sleep 3 00:10:45.855 [2024-04-27 00:30:19.170901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.855 [2024-04-27 00:30:19.346187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.855 [2024-04-27 00:30:19.346195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.114 [2024-04-27 00:30:19.519995] notify.c: 45:spdk_notify_type_register: 
*NOTICE*: Notification type 'bdev_register' already registered. 00:10:46.114 [2024-04-27 00:30:19.520344] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:48.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:48.016 00:30:21 -- event/event.sh@38 -- # waitforlisten 112326 /var/tmp/spdk-nbd.sock 00:10:48.016 00:30:21 -- common/autotest_common.sh@817 -- # '[' -z 112326 ']' 00:10:48.016 00:30:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:48.016 00:30:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:48.016 00:30:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:48.016 00:30:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:48.016 00:30:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.016 00:30:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:48.016 00:30:21 -- common/autotest_common.sh@850 -- # return 0 00:10:48.016 00:30:21 -- event/event.sh@39 -- # killprocess 112326 00:10:48.016 00:30:21 -- common/autotest_common.sh@936 -- # '[' -z 112326 ']' 00:10:48.016 00:30:21 -- common/autotest_common.sh@940 -- # kill -0 112326 00:10:48.016 00:30:21 -- common/autotest_common.sh@941 -- # uname 00:10:48.016 00:30:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:48.016 00:30:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112326 00:10:48.016 killing process with pid 112326 00:10:48.016 00:30:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:48.016 00:30:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:48.016 00:30:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112326' 00:10:48.016 00:30:21 -- common/autotest_common.sh@955 -- # kill 112326 00:10:48.016 00:30:21 -- common/autotest_common.sh@960 -- # wait 112326 00:10:48.952 spdk_app_start is called in Round 0. 00:10:48.952 Shutdown signal received, stop current app iteration 00:10:48.952 Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 reinitialization... 00:10:48.952 spdk_app_start is called in Round 1. 00:10:48.952 Shutdown signal received, stop current app iteration 00:10:48.952 Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 reinitialization... 00:10:48.952 spdk_app_start is called in Round 2. 00:10:48.952 Shutdown signal received, stop current app iteration 00:10:48.952 Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 reinitialization... 00:10:48.952 spdk_app_start is called in Round 3. 
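
The killprocess calls traced above guard against pid reuse before signalling: kill -0 probes that the pid is alive, ps -o comm= confirms the command name (SPDK renames its main thread, which is why the trace shows process_name=reactor_0), and a refusal branch makes sure sudo is never the target. A condensed sketch of that guard:

  # Kill a test-owned SPDK process by pid, with the same guards as the trace.
  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1                # never signal sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it if it is our child
  }
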
00:10:48.952 Shutdown signal received, stop current app iteration 00:10:48.952 ************************************ 00:10:48.952 END TEST app_repeat 00:10:48.952 ************************************ 00:10:48.952 00:30:22 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:48.952 00:30:22 -- event/event.sh@42 -- # return 0 00:10:48.952 00:10:48.952 real 0m20.461s 00:10:48.952 user 0m44.431s 00:10:48.952 sys 0m2.778s 00:10:48.952 00:30:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:48.952 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:10:48.952 00:30:22 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:48.952 00:30:22 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:48.952 00:30:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:48.952 00:30:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.952 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:10:48.952 ************************************ 00:10:48.952 START TEST cpu_locks 00:10:48.952 ************************************ 00:10:48.952 00:30:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:49.211 * Looking for test storage... 00:10:49.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:49.211 00:30:22 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:49.211 00:30:22 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:49.211 00:30:22 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:49.211 00:30:22 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:49.211 00:30:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:49.211 00:30:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.211 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:10:49.211 ************************************ 00:10:49.211 START TEST default_locks 00:10:49.211 ************************************ 00:10:49.211 00:30:22 -- common/autotest_common.sh@1111 -- # default_locks 00:10:49.211 00:30:22 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=112862 00:10:49.211 00:30:22 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:49.211 00:30:22 -- event/cpu_locks.sh@47 -- # waitforlisten 112862 00:10:49.211 00:30:22 -- common/autotest_common.sh@817 -- # '[' -z 112862 ']' 00:10:49.211 00:30:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.211 00:30:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.211 00:30:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.211 00:30:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.211 00:30:22 -- common/autotest_common.sh@10 -- # set +x 00:10:49.211 [2024-04-27 00:30:22.689354] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
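
waitforlisten, traced at every target launch here, parks the test until the new process answers RPCs on its UNIX socket; the trace shows the empty-pid guard, the max_retries=100 budget, and the "Waiting for process..." banner. A minimal re-creation, assuming spdk_get_version as the liveness probe and a 0.5 s retry interval (the trace does not reveal either, so both are assumptions):

  # Block until the SPDK app with $pid accepts RPCs on $rpc_addr.
  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    [ -z "$pid" ] && return 1
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1      # died during startup
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
        spdk_get_version &>/dev/null && return 0
      sleep 0.5                                   # retry interval is assumed
    done
    return 1
  }
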
00:10:49.211 [2024-04-27 00:30:22.689591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112862 ] 00:10:49.469 [2024-04-27 00:30:22.856601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.469 [2024-04-27 00:30:23.041462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.404 00:30:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:50.404 00:30:23 -- common/autotest_common.sh@850 -- # return 0 00:10:50.404 00:30:23 -- event/cpu_locks.sh@49 -- # locks_exist 112862 00:10:50.404 00:30:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:50.404 00:30:23 -- event/cpu_locks.sh@22 -- # lslocks -p 112862 00:10:50.662 00:30:24 -- event/cpu_locks.sh@50 -- # killprocess 112862 00:10:50.662 00:30:24 -- common/autotest_common.sh@936 -- # '[' -z 112862 ']' 00:10:50.662 00:30:24 -- common/autotest_common.sh@940 -- # kill -0 112862 00:10:50.662 00:30:24 -- common/autotest_common.sh@941 -- # uname 00:10:50.662 00:30:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.662 00:30:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112862 00:10:50.662 killing process with pid 112862 00:10:50.662 00:30:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:50.662 00:30:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:50.662 00:30:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112862' 00:10:50.662 00:30:24 -- common/autotest_common.sh@955 -- # kill 112862 00:10:50.662 00:30:24 -- common/autotest_common.sh@960 -- # wait 112862 00:10:52.579 00:30:26 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 112862 00:10:52.579 00:30:26 -- common/autotest_common.sh@638 -- # local es=0 00:10:52.579 00:30:26 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 112862 00:10:52.579 00:30:26 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:10:52.579 00:30:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:52.579 00:30:26 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:10:52.579 00:30:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:52.579 00:30:26 -- common/autotest_common.sh@641 -- # waitforlisten 112862 00:10:52.579 00:30:26 -- common/autotest_common.sh@817 -- # '[' -z 112862 ']' 00:10:52.579 00:30:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.579 00:30:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:52.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.580 00:30:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
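
The locks_exist check above is the core of default_locks: spdk_tgt takes one POSIX file lock per claimed core, on /var/tmp/spdk_cpu_lock_*, and lslocks filtered by pid proves the lock is actually held. Standalone:

  # Assert that process $1 holds at least one SPDK CPU-core file lock.
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  locks_exist 112862 && echo "pid 112862 holds its core lock(s)"
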
00:10:52.580 00:30:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:52.580 00:30:26 -- common/autotest_common.sh@10 -- # set +x 00:10:52.580 ERROR: process (pid: 112862) is no longer running 00:10:52.580 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (112862) - No such process 00:10:52.580 00:30:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:52.580 00:30:26 -- common/autotest_common.sh@850 -- # return 1 00:10:52.580 00:30:26 -- common/autotest_common.sh@641 -- # es=1 00:10:52.580 00:30:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:52.580 00:30:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:52.580 00:30:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:52.580 00:30:26 -- event/cpu_locks.sh@54 -- # no_locks 00:10:52.580 00:30:26 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:52.580 00:30:26 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:52.580 00:30:26 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:52.580 00:10:52.580 real 0m3.421s 00:10:52.580 user 0m3.474s 00:10:52.580 sys 0m0.599s 00:10:52.580 00:30:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:52.580 00:30:26 -- common/autotest_common.sh@10 -- # set +x 00:10:52.580 ************************************ 00:10:52.580 END TEST default_locks 00:10:52.580 ************************************ 00:10:52.580 00:30:26 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:52.580 00:30:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:52.580 00:30:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:52.580 00:30:26 -- common/autotest_common.sh@10 -- # set +x 00:10:52.580 ************************************ 00:10:52.580 START TEST default_locks_via_rpc 00:10:52.580 ************************************ 00:10:52.580 00:30:26 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:10:52.580 00:30:26 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:52.580 00:30:26 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=112946 00:10:52.580 00:30:26 -- event/cpu_locks.sh@63 -- # waitforlisten 112946 00:10:52.580 00:30:26 -- common/autotest_common.sh@817 -- # '[' -z 112946 ']' 00:10:52.580 00:30:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.580 00:30:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:52.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.580 00:30:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.580 00:30:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:52.580 00:30:26 -- common/autotest_common.sh@10 -- # set +x 00:10:52.837 [2024-04-27 00:30:26.185220] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
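
The NOT wrapper in the failed-relisten check above inverts an expected failure: it records the wrapped command's exit status, still propagates anything above 128 (death by signal is a genuine failure), and otherwise succeeds only when the command failed. That is what the es=1, (( es > 128 )) and (( !es == 0 )) lines implement; condensed:

  # Succeed only when the wrapped command fails cleanly (exit 1..128).
  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"    # killed by a signal: real failure
    (( es != 0 ))                     # success here means "it failed"
  }

  NOT false && echo "false failed, so NOT succeeds"
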
00:10:52.837 [2024-04-27 00:30:26.185395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112946 ] 00:10:52.837 [2024-04-27 00:30:26.335507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.095 [2024-04-27 00:30:26.537057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.662 00:30:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:53.662 00:30:27 -- common/autotest_common.sh@850 -- # return 0 00:10:53.662 00:30:27 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:53.662 00:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.662 00:30:27 -- common/autotest_common.sh@10 -- # set +x 00:10:53.662 00:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.662 00:30:27 -- event/cpu_locks.sh@67 -- # no_locks 00:10:53.662 00:30:27 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:53.662 00:30:27 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:53.662 00:30:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:53.662 00:30:27 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:53.662 00:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.921 00:30:27 -- common/autotest_common.sh@10 -- # set +x 00:10:53.921 00:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.921 00:30:27 -- event/cpu_locks.sh@71 -- # locks_exist 112946 00:10:53.921 00:30:27 -- event/cpu_locks.sh@22 -- # lslocks -p 112946 00:10:53.921 00:30:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:53.921 00:30:27 -- event/cpu_locks.sh@73 -- # killprocess 112946 00:10:53.921 00:30:27 -- common/autotest_common.sh@936 -- # '[' -z 112946 ']' 00:10:53.921 00:30:27 -- common/autotest_common.sh@940 -- # kill -0 112946 00:10:53.921 00:30:27 -- common/autotest_common.sh@941 -- # uname 00:10:53.921 00:30:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.921 00:30:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 112946 00:10:53.921 00:30:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:53.921 killing process with pid 112946 00:10:53.921 00:30:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:53.921 00:30:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 112946' 00:10:53.921 00:30:27 -- common/autotest_common.sh@955 -- # kill 112946 00:10:53.921 00:30:27 -- common/autotest_common.sh@960 -- # wait 112946 00:10:55.825 00:10:55.825 real 0m3.221s 00:10:55.825 user 0m3.185s 00:10:55.825 sys 0m0.617s 00:10:55.825 00:30:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.825 00:30:29 -- common/autotest_common.sh@10 -- # set +x 00:10:55.825 ************************************ 00:10:55.825 END TEST default_locks_via_rpc 00:10:55.825 ************************************ 00:10:55.825 00:30:29 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:55.825 00:30:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:55.825 00:30:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.825 00:30:29 -- common/autotest_common.sh@10 -- # set +x 00:10:56.084 ************************************ 00:10:56.084 START TEST non_locking_app_on_locked_coremask 00:10:56.084 ************************************ 00:10:56.084 
00:30:29 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:10:56.084 00:30:29 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:56.084 00:30:29 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=113022 00:10:56.084 00:30:29 -- event/cpu_locks.sh@81 -- # waitforlisten 113022 /var/tmp/spdk.sock 00:10:56.084 00:30:29 -- common/autotest_common.sh@817 -- # '[' -z 113022 ']' 00:10:56.084 00:30:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.084 00:30:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:56.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.084 00:30:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.084 00:30:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:56.084 00:30:29 -- common/autotest_common.sh@10 -- # set +x 00:10:56.084 [2024-04-27 00:30:29.500868] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:10:56.084 [2024-04-27 00:30:29.501065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113022 ] 00:10:56.084 [2024-04-27 00:30:29.668976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.343 [2024-04-27 00:30:29.866512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.279 00:30:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:57.279 00:30:30 -- common/autotest_common.sh@850 -- # return 0 00:10:57.279 00:30:30 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=113050 00:10:57.279 00:30:30 -- event/cpu_locks.sh@85 -- # waitforlisten 113050 /var/tmp/spdk2.sock 00:10:57.279 00:30:30 -- common/autotest_common.sh@817 -- # '[' -z 113050 ']' 00:10:57.279 00:30:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:57.279 00:30:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:57.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:57.279 00:30:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:57.279 00:30:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:57.279 00:30:30 -- common/autotest_common.sh@10 -- # set +x 00:10:57.279 00:30:30 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:57.279 [2024-04-27 00:30:30.649850] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:10:57.279 [2024-04-27 00:30:30.650058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113050 ] 00:10:57.279 [2024-04-27 00:30:30.810294] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
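
The launch pattern traced above is what makes non_locking_app_on_locked_coremask pass: the first target claims the core-0 lock, and a second target on the same core still boots because it runs with --disable-cpumask-locks and listens on its own RPC socket. Condensed, reusing the waitforlisten_sketch helper from earlier:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                              # takes the core-0 lock
  pid1=$!
  waitforlisten_sketch "$pid1" /var/tmp/spdk.sock

  "$spdk_tgt" -m 0x1 --disable-cpumask-locks \
              -r /var/tmp/spdk2.sock &              # same core, no lock taken
  pid2=$!
  waitforlisten_sketch "$pid2" /var/tmp/spdk2.sock  # boots despite the overlap
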
00:10:57.279 [2024-04-27 00:30:30.826420] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.859 [2024-04-27 00:30:31.165814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.261 00:30:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:59.261 00:30:32 -- common/autotest_common.sh@850 -- # return 0 00:10:59.261 00:30:32 -- event/cpu_locks.sh@87 -- # locks_exist 113022 00:10:59.261 00:30:32 -- event/cpu_locks.sh@22 -- # lslocks -p 113022 00:10:59.261 00:30:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:59.537 00:30:32 -- event/cpu_locks.sh@89 -- # killprocess 113022 00:10:59.537 00:30:32 -- common/autotest_common.sh@936 -- # '[' -z 113022 ']' 00:10:59.537 00:30:32 -- common/autotest_common.sh@940 -- # kill -0 113022 00:10:59.537 00:30:32 -- common/autotest_common.sh@941 -- # uname 00:10:59.537 00:30:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:59.537 00:30:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113022 00:10:59.537 00:30:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:59.537 00:30:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:59.537 killing process with pid 113022 00:10:59.537 00:30:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113022' 00:10:59.537 00:30:33 -- common/autotest_common.sh@955 -- # kill 113022 00:10:59.537 00:30:33 -- common/autotest_common.sh@960 -- # wait 113022 00:11:03.763 00:30:36 -- event/cpu_locks.sh@90 -- # killprocess 113050 00:11:03.763 00:30:36 -- common/autotest_common.sh@936 -- # '[' -z 113050 ']' 00:11:03.763 00:30:36 -- common/autotest_common.sh@940 -- # kill -0 113050 00:11:03.763 00:30:36 -- common/autotest_common.sh@941 -- # uname 00:11:03.763 00:30:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:03.763 00:30:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113050 00:11:03.763 00:30:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:03.763 killing process with pid 113050 00:11:03.763 00:30:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:03.763 00:30:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113050' 00:11:03.763 00:30:36 -- common/autotest_common.sh@955 -- # kill 113050 00:11:03.763 00:30:36 -- common/autotest_common.sh@960 -- # wait 113050 00:11:05.141 00:11:05.141 real 0m9.264s 00:11:05.141 user 0m9.548s 00:11:05.141 sys 0m1.247s 00:11:05.141 00:30:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:05.141 00:30:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.141 ************************************ 00:11:05.141 END TEST non_locking_app_on_locked_coremask 00:11:05.141 ************************************ 00:11:05.400 00:30:38 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:05.400 00:30:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:05.400 00:30:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.400 00:30:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.400 ************************************ 00:11:05.400 START TEST locking_app_on_unlocked_coremask 00:11:05.400 ************************************ 00:11:05.400 00:30:38 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:11:05.400 00:30:38 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=113185 00:11:05.400 00:30:38 -- event/cpu_locks.sh@99 -- # waitforlisten 113185 
/var/tmp/spdk.sock 00:11:05.400 00:30:38 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:05.400 00:30:38 -- common/autotest_common.sh@817 -- # '[' -z 113185 ']' 00:11:05.400 00:30:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.400 00:30:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:05.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.400 00:30:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.400 00:30:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:05.400 00:30:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.400 [2024-04-27 00:30:38.856101] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:05.400 [2024-04-27 00:30:38.856327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113185 ] 00:11:05.659 [2024-04-27 00:30:39.024404] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:05.659 [2024-04-27 00:30:39.024477] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.659 [2024-04-27 00:30:39.212572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.634 00:30:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:06.634 00:30:39 -- common/autotest_common.sh@850 -- # return 0 00:11:06.634 00:30:39 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=113206 00:11:06.634 00:30:39 -- event/cpu_locks.sh@103 -- # waitforlisten 113206 /var/tmp/spdk2.sock 00:11:06.634 00:30:39 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:06.634 00:30:39 -- common/autotest_common.sh@817 -- # '[' -z 113206 ']' 00:11:06.634 00:30:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:06.634 00:30:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:06.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:06.634 00:30:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:06.634 00:30:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:06.634 00:30:39 -- common/autotest_common.sh@10 -- # set +x 00:11:06.634 [2024-04-27 00:30:39.972221] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
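
Here the roles flip: the first target (113185) printed "CPU core locks deactivated" and never takes the lock, so the plain second instance now booting is the one that will own core 0. Lock ownership is visible directly in lslocks; an illustrative check once both are up:

  # Only the instance started without --disable-cpumask-locks holds the lock.
  if lslocks -p 113206 | grep -q spdk_cpu_lock; then
    echo "pid 113206 owns the core-0 lock; pid 113185 holds none"
  fi
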
00:11:06.634 [2024-04-27 00:30:39.972379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113206 ] 00:11:06.634 [2024-04-27 00:30:40.122817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.202 [2024-04-27 00:30:40.507781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.578 00:30:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:08.578 00:30:41 -- common/autotest_common.sh@850 -- # return 0 00:11:08.578 00:30:41 -- event/cpu_locks.sh@105 -- # locks_exist 113206 00:11:08.578 00:30:41 -- event/cpu_locks.sh@22 -- # lslocks -p 113206 00:11:08.578 00:30:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:08.838 00:30:42 -- event/cpu_locks.sh@107 -- # killprocess 113185 00:11:08.838 00:30:42 -- common/autotest_common.sh@936 -- # '[' -z 113185 ']' 00:11:08.838 00:30:42 -- common/autotest_common.sh@940 -- # kill -0 113185 00:11:08.838 00:30:42 -- common/autotest_common.sh@941 -- # uname 00:11:08.838 00:30:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:08.838 00:30:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113185 00:11:08.838 00:30:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:08.838 killing process with pid 113185 00:11:08.838 00:30:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:08.838 00:30:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113185' 00:11:08.838 00:30:42 -- common/autotest_common.sh@955 -- # kill 113185 00:11:08.838 00:30:42 -- common/autotest_common.sh@960 -- # wait 113185 00:11:13.027 00:30:46 -- event/cpu_locks.sh@108 -- # killprocess 113206 00:11:13.027 00:30:46 -- common/autotest_common.sh@936 -- # '[' -z 113206 ']' 00:11:13.027 00:30:46 -- common/autotest_common.sh@940 -- # kill -0 113206 00:11:13.027 00:30:46 -- common/autotest_common.sh@941 -- # uname 00:11:13.027 00:30:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:13.027 00:30:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113206 00:11:13.027 00:30:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:13.027 killing process with pid 113206 00:11:13.027 00:30:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:13.027 00:30:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113206' 00:11:13.027 00:30:46 -- common/autotest_common.sh@955 -- # kill 113206 00:11:13.027 00:30:46 -- common/autotest_common.sh@960 -- # wait 113206 00:11:14.405 00:11:14.405 real 0m9.124s 00:11:14.405 user 0m9.419s 00:11:14.405 sys 0m1.144s 00:11:14.405 ************************************ 00:11:14.405 END TEST locking_app_on_unlocked_coremask 00:11:14.405 ************************************ 00:11:14.405 00:30:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:14.405 00:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:14.405 00:30:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:14.405 00:30:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:14.405 00:30:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.405 00:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:14.664 ************************************ 00:11:14.665 START TEST locking_app_on_locked_coremask 00:11:14.665 
************************************ 00:11:14.665 00:30:47 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:11:14.665 00:30:47 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=113346 00:11:14.665 00:30:47 -- event/cpu_locks.sh@116 -- # waitforlisten 113346 /var/tmp/spdk.sock 00:11:14.665 00:30:47 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:14.665 00:30:47 -- common/autotest_common.sh@817 -- # '[' -z 113346 ']' 00:11:14.665 00:30:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.665 00:30:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:14.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.665 00:30:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.665 00:30:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:14.665 00:30:47 -- common/autotest_common.sh@10 -- # set +x 00:11:14.665 [2024-04-27 00:30:48.066116] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:14.665 [2024-04-27 00:30:48.066309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113346 ] 00:11:14.665 [2024-04-27 00:30:48.231928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.925 [2024-04-27 00:30:48.429714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.860 00:30:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:15.860 00:30:49 -- common/autotest_common.sh@850 -- # return 0 00:11:15.861 00:30:49 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=113367 00:11:15.861 00:30:49 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 113367 /var/tmp/spdk2.sock 00:11:15.861 00:30:49 -- common/autotest_common.sh@638 -- # local es=0 00:11:15.861 00:30:49 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113367 /var/tmp/spdk2.sock 00:11:15.861 00:30:49 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:15.861 00:30:49 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:15.861 00:30:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:15.861 00:30:49 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:15.861 00:30:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:15.861 00:30:49 -- common/autotest_common.sh@641 -- # waitforlisten 113367 /var/tmp/spdk2.sock 00:11:15.861 00:30:49 -- common/autotest_common.sh@817 -- # '[' -z 113367 ']' 00:11:15.861 00:30:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:15.861 00:30:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:15.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:15.861 00:30:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:15.861 00:30:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:15.861 00:30:49 -- common/autotest_common.sh@10 -- # set +x 00:11:15.861 [2024-04-27 00:30:49.200655] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
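
locking_app_on_locked_coremask runs the opposite experiment: the second target keeps cpumask locks enabled on the already-claimed mask, so its boot has to die, which is why the trace wraps waitforlisten in NOT. A self-contained sketch that waits on the doomed child instead of using the suite's helper:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # Core 0 is already locked by a running target (pid 113346 in the trace).
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &       # locks enabled, must fail
  pid2=$!
  if wait "$pid2"; then
    echo "unexpected: second instance started" >&2
  else
    echo "second instance exited at boot, as required"
  fi
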
00:11:15.861 [2024-04-27 00:30:49.201140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113367 ] 00:11:15.861 [2024-04-27 00:30:49.378663] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 113346 has claimed it. 00:11:15.861 [2024-04-27 00:30:49.378789] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:16.427 ERROR: process (pid: 113367) is no longer running 00:11:16.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113367) - No such process 00:11:16.427 00:30:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:16.427 00:30:49 -- common/autotest_common.sh@850 -- # return 1 00:11:16.427 00:30:49 -- common/autotest_common.sh@641 -- # es=1 00:11:16.427 00:30:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:16.427 00:30:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:16.427 00:30:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:16.427 00:30:49 -- event/cpu_locks.sh@122 -- # locks_exist 113346 00:11:16.427 00:30:49 -- event/cpu_locks.sh@22 -- # lslocks -p 113346 00:11:16.427 00:30:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:16.685 00:30:50 -- event/cpu_locks.sh@124 -- # killprocess 113346 00:11:16.685 00:30:50 -- common/autotest_common.sh@936 -- # '[' -z 113346 ']' 00:11:16.685 00:30:50 -- common/autotest_common.sh@940 -- # kill -0 113346 00:11:16.685 00:30:50 -- common/autotest_common.sh@941 -- # uname 00:11:16.685 00:30:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:16.685 00:30:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113346 00:11:16.685 00:30:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:16.685 killing process with pid 113346 00:11:16.685 00:30:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:16.685 00:30:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113346' 00:11:16.685 00:30:50 -- common/autotest_common.sh@955 -- # kill 113346 00:11:16.685 00:30:50 -- common/autotest_common.sh@960 -- # wait 113346 00:11:19.221 ************************************ 00:11:19.221 END TEST locking_app_on_locked_coremask 00:11:19.221 ************************************ 00:11:19.221 00:11:19.221 real 0m4.251s 00:11:19.221 user 0m4.555s 00:11:19.221 sys 0m0.793s 00:11:19.221 00:30:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:19.221 00:30:52 -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 00:30:52 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:19.221 00:30:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:19.221 00:30:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:19.221 00:30:52 -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 ************************************ 00:11:19.221 START TEST locking_overlapped_coremask 00:11:19.221 ************************************ 00:11:19.221 00:30:52 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:11:19.221 00:30:52 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:19.221 00:30:52 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=113440 00:11:19.221 00:30:52 -- event/cpu_locks.sh@133 -- # waitforlisten 113440 
/var/tmp/spdk.sock 00:11:19.221 00:30:52 -- common/autotest_common.sh@817 -- # '[' -z 113440 ']' 00:11:19.221 00:30:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.221 00:30:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:19.221 00:30:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.221 00:30:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:19.221 00:30:52 -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 [2024-04-27 00:30:52.402915] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:19.221 [2024-04-27 00:30:52.403368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113440 ] 00:11:19.221 [2024-04-27 00:30:52.581035] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.221 [2024-04-27 00:30:52.783154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.221 [2024-04-27 00:30:52.783288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.221 [2024-04-27 00:30:52.783285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.168 00:30:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:20.168 00:30:53 -- common/autotest_common.sh@850 -- # return 0 00:11:20.168 00:30:53 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=113463 00:11:20.168 00:30:53 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:20.168 00:30:53 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 113463 /var/tmp/spdk2.sock 00:11:20.168 00:30:53 -- common/autotest_common.sh@638 -- # local es=0 00:11:20.168 00:30:53 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 113463 /var/tmp/spdk2.sock 00:11:20.168 00:30:53 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:11:20.168 00:30:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:20.168 00:30:53 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:11:20.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:20.168 00:30:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:20.168 00:30:53 -- common/autotest_common.sh@641 -- # waitforlisten 113463 /var/tmp/spdk2.sock 00:11:20.168 00:30:53 -- common/autotest_common.sh@817 -- # '[' -z 113463 ']' 00:11:20.168 00:30:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:20.168 00:30:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:20.168 00:30:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:20.168 00:30:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:20.168 00:30:53 -- common/autotest_common.sh@10 -- # set +x 00:11:20.168 [2024-04-27 00:30:53.629109] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
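
The masks in this test are 0x7 (cores 0-2) for the first target and 0x1c (cores 2-4) for the second; their bitwise intersection is the single contested core, which is why the failure reported next names core 2. The arithmetic:

  # Overlap between the two reactor masks used in this test.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2
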
00:11:20.168 [2024-04-27 00:30:53.629583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113463 ] 00:11:20.426 [2024-04-27 00:30:53.816313] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113440 has claimed it. 00:11:20.426 [2024-04-27 00:30:53.816414] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:20.993 ERROR: process (pid: 113463) is no longer running 00:11:20.993 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (113463) - No such process 00:11:20.993 00:30:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:20.993 00:30:54 -- common/autotest_common.sh@850 -- # return 1 00:11:20.993 00:30:54 -- common/autotest_common.sh@641 -- # es=1 00:11:20.993 00:30:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:20.993 00:30:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:20.993 00:30:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:20.993 00:30:54 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:20.993 00:30:54 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:20.993 00:30:54 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:20.993 00:30:54 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:20.993 00:30:54 -- event/cpu_locks.sh@141 -- # killprocess 113440 00:11:20.993 00:30:54 -- common/autotest_common.sh@936 -- # '[' -z 113440 ']' 00:11:20.993 00:30:54 -- common/autotest_common.sh@940 -- # kill -0 113440 00:11:20.993 00:30:54 -- common/autotest_common.sh@941 -- # uname 00:11:20.993 00:30:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:20.993 00:30:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113440 00:11:20.993 00:30:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:20.993 00:30:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:20.993 00:30:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113440' 00:11:20.993 killing process with pid 113440 00:11:20.993 00:30:54 -- common/autotest_common.sh@955 -- # kill 113440 00:11:20.993 00:30:54 -- common/autotest_common.sh@960 -- # wait 113440 00:11:22.898 ************************************ 00:11:22.898 END TEST locking_overlapped_coremask 00:11:22.898 ************************************ 00:11:22.898 00:11:22.898 real 0m4.009s 00:11:22.898 user 0m10.533s 00:11:22.898 sys 0m0.625s 00:11:22.898 00:30:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:22.898 00:30:56 -- common/autotest_common.sh@10 -- # set +x 00:11:22.898 00:30:56 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:22.899 00:30:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.899 00:30:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.899 00:30:56 -- common/autotest_common.sh@10 -- # set +x 00:11:22.899 ************************************ 00:11:22.899 START TEST locking_overlapped_coremask_via_rpc 00:11:22.899 
************************************ 00:11:22.899 00:30:56 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:11:22.899 00:30:56 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=113539 00:11:22.899 00:30:56 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:22.899 00:30:56 -- event/cpu_locks.sh@149 -- # waitforlisten 113539 /var/tmp/spdk.sock 00:11:22.899 00:30:56 -- common/autotest_common.sh@817 -- # '[' -z 113539 ']' 00:11:22.899 00:30:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.899 00:30:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:22.899 00:30:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.899 00:30:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:22.899 00:30:56 -- common/autotest_common.sh@10 -- # set +x 00:11:23.158 [2024-04-27 00:30:56.493030] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:23.158 [2024-04-27 00:30:56.493422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113539 ] 00:11:23.158 [2024-04-27 00:30:56.670086] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:23.158 [2024-04-27 00:30:56.670444] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:23.416 [2024-04-27 00:30:56.865377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.416 [2024-04-27 00:30:56.865521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.416 [2024-04-27 00:30:56.865517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:24.364 00:30:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:24.364 00:30:57 -- common/autotest_common.sh@850 -- # return 0 00:11:24.364 00:30:57 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=113561 00:11:24.364 00:30:57 -- event/cpu_locks.sh@153 -- # waitforlisten 113561 /var/tmp/spdk2.sock 00:11:24.364 00:30:57 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:24.364 00:30:57 -- common/autotest_common.sh@817 -- # '[' -z 113561 ']' 00:11:24.364 00:30:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:24.364 00:30:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:24.364 00:30:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:24.364 00:30:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:24.364 00:30:57 -- common/autotest_common.sh@10 -- # set +x 00:11:24.364 [2024-04-27 00:30:57.701503] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:11:24.364 [2024-04-27 00:30:57.701973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113561 ] 00:11:24.364 [2024-04-27 00:30:57.909974] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:24.364 [2024-04-27 00:30:57.910119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:24.952 [2024-04-27 00:30:58.374452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.952 [2024-04-27 00:30:58.374565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.952 [2024-04-27 00:30:58.374568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:26.854 00:31:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:26.854 00:31:00 -- common/autotest_common.sh@850 -- # return 0 00:11:26.854 00:31:00 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:26.854 00:31:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.854 00:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:26.854 00:31:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.854 00:31:00 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:26.854 00:31:00 -- common/autotest_common.sh@638 -- # local es=0 00:11:26.854 00:31:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:26.854 00:31:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:11:26.854 00:31:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:26.854 00:31:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:11:26.854 00:31:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:26.855 00:31:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:26.855 00:31:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.855 00:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:26.855 [2024-04-27 00:31:00.342524] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113539 has claimed it. 00:11:26.855 request: 00:11:26.855 { 00:11:26.855 "method": "framework_enable_cpumask_locks", 00:11:26.855 "req_id": 1 00:11:26.855 } 00:11:26.855 Got JSON-RPC error response 00:11:26.855 response: 00:11:26.855 { 00:11:26.855 "code": -32603, 00:11:26.855 "message": "Failed to claim CPU core: 2" 00:11:26.855 } 00:11:26.855 00:31:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:26.855 00:31:00 -- common/autotest_common.sh@641 -- # es=1 00:11:26.855 00:31:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:26.855 00:31:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:26.855 00:31:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:26.855 00:31:00 -- event/cpu_locks.sh@158 -- # waitforlisten 113539 /var/tmp/spdk.sock 00:11:26.855 00:31:00 -- common/autotest_common.sh@817 -- # '[' -z 113539 ']' 00:11:26.855 00:31:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.855 00:31:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:26.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
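The request/response pair above is the RPC variant of the same collision: both targets came up with --disable-cpumask-locks, the first one then claimed cores 0-2 via framework_enable_cpumask_locks, and the second one's attempt is rejected with JSON-RPC error -32603 because core 2 is already locked. The same calls can be issued by hand; a sketch assuming SPDK's stock scripts/rpc.py client (rpc_cmd in this trace is a thin wrapper around it):

  scripts/rpc.py framework_enable_cpumask_locks                         # first target on /var/tmp/spdk.sock: ok
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: 'Failed to claim CPU core: 2'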
00:11:26.855 00:31:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.855 00:31:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:26.855 00:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:27.113 00:31:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:27.113 00:31:00 -- common/autotest_common.sh@850 -- # return 0 00:11:27.113 00:31:00 -- event/cpu_locks.sh@159 -- # waitforlisten 113561 /var/tmp/spdk2.sock 00:11:27.113 00:31:00 -- common/autotest_common.sh@817 -- # '[' -z 113561 ']' 00:11:27.113 00:31:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:27.113 00:31:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:27.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:27.113 00:31:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:27.113 00:31:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:27.113 00:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:27.371 00:31:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:27.371 00:31:00 -- common/autotest_common.sh@850 -- # return 0 00:11:27.371 00:31:00 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:27.371 00:31:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:27.371 00:31:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:27.371 ************************************ 00:11:27.371 END TEST locking_overlapped_coremask_via_rpc 00:11:27.371 00:31:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:27.371 00:11:27.371 real 0m4.423s 00:11:27.371 user 0m1.459s 00:11:27.371 sys 0m0.191s 00:11:27.371 00:31:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:27.371 00:31:00 -- common/autotest_common.sh@10 -- # set +x 00:11:27.371 ************************************ 00:11:27.371 00:31:00 -- event/cpu_locks.sh@174 -- # cleanup 00:11:27.371 00:31:00 -- event/cpu_locks.sh@15 -- # [[ -z 113539 ]] 00:11:27.371 00:31:00 -- event/cpu_locks.sh@15 -- # killprocess 113539 00:11:27.371 00:31:00 -- common/autotest_common.sh@936 -- # '[' -z 113539 ']' 00:11:27.371 00:31:00 -- common/autotest_common.sh@940 -- # kill -0 113539 00:11:27.371 00:31:00 -- common/autotest_common.sh@941 -- # uname 00:11:27.371 00:31:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:27.371 00:31:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113539 00:11:27.371 killing process with pid 113539 00:11:27.371 00:31:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:27.371 00:31:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:27.371 00:31:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113539' 00:11:27.371 00:31:00 -- common/autotest_common.sh@955 -- # kill 113539 00:11:27.371 00:31:00 -- common/autotest_common.sh@960 -- # wait 113539 00:11:29.901 00:31:02 -- event/cpu_locks.sh@16 -- # [[ -z 113561 ]] 00:11:29.901 00:31:02 -- event/cpu_locks.sh@16 -- # killprocess 113561 00:11:29.901 00:31:02 -- common/autotest_common.sh@936 -- # '[' -z 113561 ']' 
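The check_remaining_locks trace above (cpu_locks.sh@36-38) verifies, before cleanup kills both targets, that exactly one lock file per claimed core is left behind: it globs the live lock files and compares them against a brace expansion of the expected set. Condensed from the trace:

  locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one per core in mask 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]       # sets must match exactly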
00:11:29.901 00:31:02 -- common/autotest_common.sh@940 -- # kill -0 113561 00:11:29.901 00:31:02 -- common/autotest_common.sh@941 -- # uname 00:11:29.901 00:31:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:29.901 00:31:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113561 00:11:29.901 killing process with pid 113561 00:11:29.901 00:31:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:29.901 00:31:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:29.901 00:31:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113561' 00:11:29.901 00:31:03 -- common/autotest_common.sh@955 -- # kill 113561 00:11:29.901 00:31:03 -- common/autotest_common.sh@960 -- # wait 113561 00:11:31.805 00:31:05 -- event/cpu_locks.sh@18 -- # rm -f 00:11:31.805 00:31:05 -- event/cpu_locks.sh@1 -- # cleanup 00:11:31.805 00:31:05 -- event/cpu_locks.sh@15 -- # [[ -z 113539 ]] 00:11:31.805 00:31:05 -- event/cpu_locks.sh@15 -- # killprocess 113539 00:11:31.805 00:31:05 -- common/autotest_common.sh@936 -- # '[' -z 113539 ']' 00:11:31.805 00:31:05 -- common/autotest_common.sh@940 -- # kill -0 113539 00:11:31.805 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (113539) - No such process 00:11:31.805 Process with pid 113539 is not found 00:11:31.805 Process with pid 113561 is not found 00:11:31.805 00:31:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 113539 is not found' 00:11:31.805 00:31:05 -- event/cpu_locks.sh@16 -- # [[ -z 113561 ]] 00:11:31.805 00:31:05 -- event/cpu_locks.sh@16 -- # killprocess 113561 00:11:31.805 00:31:05 -- common/autotest_common.sh@936 -- # '[' -z 113561 ']' 00:11:31.805 00:31:05 -- common/autotest_common.sh@940 -- # kill -0 113561 00:11:31.805 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (113561) - No such process 00:11:31.805 00:31:05 -- common/autotest_common.sh@963 -- # echo 'Process with pid 113561 is not found' 00:11:31.805 00:31:05 -- event/cpu_locks.sh@18 -- # rm -f 00:11:31.805 ************************************ 00:11:31.805 END TEST cpu_locks 00:11:31.805 ************************************ 00:11:31.805 00:11:31.805 real 0m42.557s 00:11:31.805 user 1m14.726s 00:11:31.805 sys 0m6.334s 00:11:31.805 00:31:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.805 00:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:31.805 00:11:31.805 real 1m14.009s 00:11:31.805 user 2m15.471s 00:11:31.805 sys 0m10.281s 00:11:31.805 00:31:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.805 ************************************ 00:11:31.805 END TEST event 00:11:31.805 ************************************ 00:11:31.805 00:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:31.805 00:31:05 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:31.805 00:31:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:31.805 00:31:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.805 00:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:31.805 ************************************ 00:11:31.805 START TEST thread 00:11:31.805 ************************************ 00:11:31.805 00:31:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:31.805 * Looking for test storage... 
00:11:31.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:31.805 00:31:05 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:31.805 00:31:05 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:31.805 00:31:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.805 00:31:05 -- common/autotest_common.sh@10 -- # set +x 00:11:31.805 ************************************ 00:11:31.805 START TEST thread_poller_perf 00:11:31.805 ************************************ 00:11:31.805 00:31:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:31.805 [2024-04-27 00:31:05.317182] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:31.805 [2024-04-27 00:31:05.317525] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113768 ] 00:11:32.063 [2024-04-27 00:31:05.485157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.322 [2024-04-27 00:31:05.725708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.322 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:33.700 ====================================== 00:11:33.700 busy:2213645146 (cyc) 00:11:33.700 total_run_count: 350000 00:11:33.700 tsc_hz: 2200000000 (cyc) 00:11:33.700 ====================================== 00:11:33.700 poller_cost: 6324 (cyc), 2874 (nsec) 00:11:33.700 00:11:33.700 real 0m1.805s 00:11:33.700 user 0m1.596s 00:11:33.700 sys 0m0.108s 00:11:33.700 00:31:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:33.700 00:31:07 -- common/autotest_common.sh@10 -- # set +x 00:11:33.700 ************************************ 00:11:33.700 END TEST thread_poller_perf 00:11:33.700 ************************************ 00:11:33.700 00:31:07 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:33.700 00:31:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:11:33.700 00:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:33.700 00:31:07 -- common/autotest_common.sh@10 -- # set +x 00:11:33.700 ************************************ 00:11:33.700 START TEST thread_poller_perf 00:11:33.700 ************************************ 00:11:33.700 00:31:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:33.700 [2024-04-27 00:31:07.210778] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:33.700 [2024-04-27 00:31:07.211298] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113820 ] 00:11:33.960 [2024-04-27 00:31:07.381422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.219 [2024-04-27 00:31:07.566916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.219 Running 1000 pollers for 1 seconds with 0 microseconds period. 
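The poller_cost line above is pure arithmetic over the counters printed with it: busy TSC cycles divided by total_run_count gives cycles per poller invocation, and dividing by the TSC rate converts that to nanoseconds. A sketch of the computation (integer truncation assumed, to mirror the printed values rather than the tool's exact code):

  awk 'BEGIN {
      busy = 2213645146; runs = 350000; tsc_hz = 2200000000
      cyc = busy / runs                          # ~6324.7 cycles per run
      printf "poller_cost: %d (cyc), %d (nsec)\n", int(cyc), int(cyc * 1e9 / tsc_hz)
  }'

The -l 0 run started above repeats the measurement for period-0 pollers, whose per-invocation cost the next set of counters shows to be far lower.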
00:11:35.595 ====================================== 00:11:35.595 busy:2203765296 (cyc) 00:11:35.595 total_run_count: 4353000 00:11:35.595 tsc_hz: 2200000000 (cyc) 00:11:35.595 ====================================== 00:11:35.595 poller_cost: 506 (cyc), 230 (nsec) 00:11:35.595 00:11:35.595 real 0m1.739s 00:11:35.595 user 0m1.521s 00:11:35.595 sys 0m0.116s 00:11:35.595 00:31:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.595 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:11:35.595 ************************************ 00:11:35.595 END TEST thread_poller_perf 00:11:35.595 ************************************ 00:11:35.595 00:31:08 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:35.595 00:31:08 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:35.595 00:31:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:35.595 00:31:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.595 00:31:08 -- common/autotest_common.sh@10 -- # set +x 00:11:35.595 ************************************ 00:11:35.595 START TEST thread_spdk_lock 00:11:35.595 ************************************ 00:11:35.595 00:31:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:35.595 [2024-04-27 00:31:09.049718] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:35.595 [2024-04-27 00:31:09.050108] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113864 ] 00:11:35.853 [2024-04-27 00:31:09.221637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:35.853 [2024-04-27 00:31:09.425295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.853 [2024-04-27 00:31:09.425304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.422 [2024-04-27 00:31:09.953655] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:36.422 [2024-04-27 00:31:09.953978] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:36.422 [2024-04-27 00:31:09.954152] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x55d427f5df00 00:11:36.422 [2024-04-27 00:31:09.961588] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:36.422 [2024-04-27 00:31:09.961807] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:36.422 [2024-04-27 00:31:09.961979] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:36.990 Starting test contend 00:11:36.990 Worker Delay Wait us Hold us Total us 00:11:36.990 0 3 134969 197771 332741 00:11:36.990 1 5 65581 300349 365931 00:11:36.990 PASS test contend 00:11:36.990 Starting test hold_by_poller 
00:11:36.990 PASS test hold_by_poller 00:11:36.990 Starting test hold_by_message 00:11:36.990 PASS test hold_by_message 00:11:36.990 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:36.990 100014 assertions passed 00:11:36.990 0 assertions failed 00:11:36.990 ************************************ 00:11:36.990 END TEST thread_spdk_lock 00:11:36.990 ************************************ 00:11:36.990 00:11:36.990 real 0m1.313s 00:11:36.990 user 0m1.645s 00:11:36.990 sys 0m0.104s 00:11:36.990 00:31:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:36.990 00:31:10 -- common/autotest_common.sh@10 -- # set +x 00:11:36.990 ************************************ 00:11:36.990 END TEST thread 00:11:36.990 ************************************ 00:11:36.990 00:11:36.990 real 0m5.206s 00:11:36.990 user 0m4.917s 00:11:36.990 sys 0m0.507s 00:11:36.990 00:31:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:36.990 00:31:10 -- common/autotest_common.sh@10 -- # set +x 00:11:36.990 00:31:10 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:36.990 00:31:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:36.990 00:31:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:36.990 00:31:10 -- common/autotest_common.sh@10 -- # set +x 00:11:36.990 ************************************ 00:11:36.990 START TEST accel 00:11:36.990 ************************************ 00:11:36.990 00:31:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:36.990 * Looking for test storage... 00:11:36.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:36.990 00:31:10 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:36.990 00:31:10 -- accel/accel.sh@82 -- # get_expected_opcs 00:11:36.990 00:31:10 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:36.990 00:31:10 -- accel/accel.sh@62 -- # spdk_tgt_pid=113956 00:11:36.990 00:31:10 -- accel/accel.sh@63 -- # waitforlisten 113956 00:11:36.990 00:31:10 -- common/autotest_common.sh@817 -- # '[' -z 113956 ']' 00:11:36.990 00:31:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.990 00:31:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:36.990 00:31:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.990 00:31:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:36.990 00:31:10 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:36.990 00:31:10 -- accel/accel.sh@61 -- # build_accel_config 00:11:36.990 00:31:10 -- common/autotest_common.sh@10 -- # set +x 00:11:36.990 00:31:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:36.990 00:31:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:36.990 00:31:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.990 00:31:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.990 00:31:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:36.990 00:31:10 -- accel/accel.sh@40 -- # local IFS=, 00:11:36.990 00:31:10 -- accel/accel.sh@41 -- # jq -r . 00:11:37.249 [2024-04-27 00:31:10.599327] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:11:37.249 [2024-04-27 00:31:10.599534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113956 ] 00:11:37.249 [2024-04-27 00:31:10.768023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.508 [2024-04-27 00:31:10.960370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.443 00:31:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:38.443 00:31:11 -- common/autotest_common.sh@850 -- # return 0 00:11:38.443 00:31:11 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:11:38.443 00:31:11 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:11:38.443 00:31:11 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:11:38.443 00:31:11 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:11:38.443 00:31:11 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:38.443 00:31:11 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:11:38.444 00:31:11 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:11:38.444 00:31:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.444 00:31:11 -- common/autotest_common.sh@10 -- # set +x 00:11:38.444 00:31:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 
00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # IFS== 00:11:38.444 00:31:11 -- accel/accel.sh@72 -- # read -r opc module 00:11:38.444 00:31:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:38.444 00:31:11 -- accel/accel.sh@75 -- # killprocess 113956 00:11:38.444 00:31:11 -- common/autotest_common.sh@936 -- # '[' -z 113956 ']' 00:11:38.444 00:31:11 -- common/autotest_common.sh@940 -- # kill -0 113956 00:11:38.444 00:31:11 -- common/autotest_common.sh@941 -- # uname 00:11:38.444 00:31:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:38.444 00:31:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 113956 00:11:38.444 00:31:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:38.444 killing process with pid 113956 00:11:38.444 00:31:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:38.444 00:31:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 113956' 00:11:38.444 00:31:11 -- common/autotest_common.sh@955 -- # kill 113956 00:11:38.444 00:31:11 -- common/autotest_common.sh@960 -- # wait 113956 00:11:40.346 00:31:13 -- accel/accel.sh@76 -- # trap - ERR 00:11:40.346 00:31:13 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:11:40.346 00:31:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:40.346 00:31:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:40.346 00:31:13 -- common/autotest_common.sh@10 -- # set +x 00:11:40.605 00:31:13 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:11:40.605 00:31:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:11:40.605 00:31:13 -- accel/accel.sh@12 -- # build_accel_config 00:11:40.605 00:31:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:40.605 00:31:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:40.605 00:31:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:40.605 
00:31:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:40.605 00:31:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:40.605 00:31:13 -- accel/accel.sh@40 -- # local IFS=, 00:11:40.605 00:31:13 -- accel/accel.sh@41 -- # jq -r . 00:11:40.605 00:31:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:40.605 00:31:14 -- common/autotest_common.sh@10 -- # set +x 00:11:40.605 00:31:14 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:11:40.605 00:31:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:40.605 00:31:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:40.605 00:31:14 -- common/autotest_common.sh@10 -- # set +x 00:11:40.605 ************************************ 00:11:40.605 START TEST accel_missing_filename 00:11:40.605 ************************************ 00:11:40.605 00:31:14 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:11:40.605 00:31:14 -- common/autotest_common.sh@638 -- # local es=0 00:11:40.605 00:31:14 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:11:40.605 00:31:14 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:11:40.605 00:31:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.605 00:31:14 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:11:40.605 00:31:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:40.605 00:31:14 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:11:40.605 00:31:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:11:40.605 00:31:14 -- accel/accel.sh@12 -- # build_accel_config 00:11:40.605 00:31:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:40.605 00:31:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:40.605 00:31:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:40.605 00:31:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:40.605 00:31:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:40.605 00:31:14 -- accel/accel.sh@40 -- # local IFS=, 00:11:40.605 00:31:14 -- accel/accel.sh@41 -- # jq -r . 00:11:40.605 [2024-04-27 00:31:14.151701] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:11:40.605 [2024-04-27 00:31:14.151902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114053 ] 00:11:40.864 [2024-04-27 00:31:14.322826] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.124 [2024-04-27 00:31:14.523432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.383 [2024-04-27 00:31:14.712166] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:41.642 [2024-04-27 00:31:15.151332] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:11:42.210 A filename is required. 
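'A filename is required.' is the expected outcome here: the compress workload reads its input from a file, so accel_perf aborts when -l is omitted, and the NOT/valid_exec_arg wrapper inverts that exit status so the test passes only when the command fails. A heavily reduced sketch of the idea behind NOT, assuming away the exit-code normalization that the es=234 handling below performs:

  NOT() {      # succeed only when the wrapped command fails
      if "$@"; then return 1; fi
      return 0
  }
  NOT accel_perf -t 1 -w compress   # aborts with 'A filename is required.' -> test passes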
00:11:42.210 00:31:15 -- common/autotest_common.sh@641 -- # es=234 00:11:42.210 00:31:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:42.210 00:31:15 -- common/autotest_common.sh@650 -- # es=106 00:11:42.210 00:31:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:42.210 00:31:15 -- common/autotest_common.sh@658 -- # es=1 00:11:42.210 00:31:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:42.210 00:11:42.210 real 0m1.404s 00:11:42.210 user 0m1.150s 00:11:42.210 sys 0m0.207s 00:11:42.210 00:31:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:42.210 00:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:42.210 ************************************ 00:11:42.210 END TEST accel_missing_filename 00:11:42.210 ************************************ 00:11:42.210 00:31:15 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.210 00:31:15 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:11:42.210 00:31:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.210 00:31:15 -- common/autotest_common.sh@10 -- # set +x 00:11:42.210 ************************************ 00:11:42.210 START TEST accel_compress_verify 00:11:42.210 ************************************ 00:11:42.210 00:31:15 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.210 00:31:15 -- common/autotest_common.sh@638 -- # local es=0 00:11:42.210 00:31:15 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.210 00:31:15 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:11:42.210 00:31:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.210 00:31:15 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:11:42.210 00:31:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.210 00:31:15 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.210 00:31:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:42.210 00:31:15 -- accel/accel.sh@12 -- # build_accel_config 00:11:42.210 00:31:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:42.210 00:31:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:42.210 00:31:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:42.210 00:31:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:42.210 00:31:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:42.210 00:31:15 -- accel/accel.sh@40 -- # local IFS=, 00:11:42.210 00:31:15 -- accel/accel.sh@41 -- # jq -r . 00:11:42.210 [2024-04-27 00:31:15.636719] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:11:42.210 [2024-04-27 00:31:15.637322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114105 ] 00:11:42.531 [2024-04-27 00:31:15.805417] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.531 [2024-04-27 00:31:15.991905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.789 [2024-04-27 00:31:16.187304] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:43.047 [2024-04-27 00:31:16.629025] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:11:43.613 00:11:43.613 Compression does not support the verify option, aborting. 00:11:43.613 00:31:16 -- common/autotest_common.sh@641 -- # es=161 00:11:43.613 00:31:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:43.613 00:31:16 -- common/autotest_common.sh@650 -- # es=33 00:11:43.613 00:31:16 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:43.613 00:31:16 -- common/autotest_common.sh@658 -- # es=1 00:11:43.613 00:31:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:43.613 00:11:43.613 real 0m1.396s 00:11:43.613 user 0m1.157s 00:11:43.613 sys 0m0.190s 00:11:43.613 00:31:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:43.613 00:31:16 -- common/autotest_common.sh@10 -- # set +x 00:11:43.613 ************************************ 00:11:43.613 END TEST accel_compress_verify 00:11:43.613 ************************************ 00:11:43.613 00:31:17 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:11:43.613 00:31:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:43.613 00:31:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.613 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.613 ************************************ 00:11:43.613 START TEST accel_wrong_workload 00:11:43.613 ************************************ 00:11:43.613 00:31:17 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:11:43.613 00:31:17 -- common/autotest_common.sh@638 -- # local es=0 00:11:43.613 00:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:11:43.613 00:31:17 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:11:43.613 00:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.613 00:31:17 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:11:43.613 00:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.613 00:31:17 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:11:43.613 00:31:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:11:43.613 00:31:17 -- accel/accel.sh@12 -- # build_accel_config 00:11:43.613 00:31:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:43.613 00:31:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:43.613 00:31:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.613 00:31:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.613 00:31:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:43.613 00:31:17 -- accel/accel.sh@40 -- # local IFS=, 00:11:43.613 00:31:17 -- accel/accel.sh@41 -- # jq -r . 
00:11:43.613 Unsupported workload type: foobar 00:11:43.613 [2024-04-27 00:31:17.120828] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:11:43.613 accel_perf options: 00:11:43.613 [-h help message] 00:11:43.613 [-q queue depth per core] 00:11:43.613 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:43.613 [-T number of threads per core 00:11:43.613 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:43.613 [-t time in seconds] 00:11:43.613 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:43.613 [ dif_verify, , dif_generate, dif_generate_copy 00:11:43.613 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:43.613 [-l for compress/decompress workloads, name of uncompressed input file 00:11:43.613 [-S for crc32c workload, use this seed value (default 0) 00:11:43.613 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:43.613 [-f for fill workload, use this BYTE value (default 255) 00:11:43.613 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:43.613 [-y verify result if this switch is on] 00:11:43.613 [-a tasks to allocate per core (default: same value as -q)] 00:11:43.613 Can be used to spread operations across a wider range of memory. 00:11:43.613 00:31:17 -- common/autotest_common.sh@641 -- # es=1 00:11:43.613 00:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:43.613 00:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:43.613 00:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:43.613 00:11:43.613 real 0m0.068s 00:11:43.613 user 0m0.107s 00:11:43.613 sys 0m0.023s 00:11:43.613 00:31:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:43.613 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.613 ************************************ 00:11:43.613 END TEST accel_wrong_workload 00:11:43.613 ************************************ 00:11:43.613 00:31:17 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:11:43.613 00:31:17 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:11:43.613 00:31:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.613 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.878 ************************************ 00:11:43.878 START TEST accel_negative_buffers 00:11:43.878 ************************************ 00:11:43.878 00:31:17 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:11:43.879 00:31:17 -- common/autotest_common.sh@638 -- # local es=0 00:11:43.879 00:31:17 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:11:43.879 00:31:17 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:11:43.879 00:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.879 00:31:17 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:11:43.879 00:31:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:43.879 00:31:17 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:11:43.879 00:31:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:11:43.879 00:31:17 -- accel/accel.sh@12 -- # 
build_accel_config 00:11:43.879 00:31:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:43.879 00:31:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:43.879 00:31:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.879 00:31:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.879 00:31:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:43.879 00:31:17 -- accel/accel.sh@40 -- # local IFS=, 00:11:43.879 00:31:17 -- accel/accel.sh@41 -- # jq -r . 00:11:43.879 -x option must be non-negative. 00:11:43.879 [2024-04-27 00:31:17.268088] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:11:43.879 accel_perf options: 00:11:43.879 [-h help message] 00:11:43.879 [-q queue depth per core] 00:11:43.879 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:43.879 [-T number of threads per core 00:11:43.879 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:43.879 [-t time in seconds] 00:11:43.879 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:43.879 [ dif_verify, , dif_generate, dif_generate_copy 00:11:43.879 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:43.879 [-l for compress/decompress workloads, name of uncompressed input file 00:11:43.879 [-S for crc32c workload, use this seed value (default 0) 00:11:43.879 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:43.879 [-f for fill workload, use this BYTE value (default 255) 00:11:43.879 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:43.879 [-y verify result if this switch is on] 00:11:43.879 [-a tasks to allocate per core (default: same value as -q)] 00:11:43.879 Can be used to spread operations across a wider range of memory. 
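Both negative tests above fail inside spdk_app_parse_args before any work is submitted, which is why the same usage text is printed twice. For contrast, the valid invocation exercised next in this log, sketched with the flags spelled out:

  # crc32c workload, 1 second, seed value 32 (-S), verify results (-y)
  accel_perf -t 1 -w crc32c -S 32 -y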
00:11:43.879 00:31:17 -- common/autotest_common.sh@641 -- # es=1 00:11:43.879 00:31:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:43.879 00:31:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:43.879 00:31:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:43.879 00:11:43.879 real 0m0.070s 00:11:43.879 user 0m0.079s 00:11:43.879 sys 0m0.042s 00:11:43.879 00:31:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:43.879 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.879 ************************************ 00:11:43.879 END TEST accel_negative_buffers 00:11:43.879 ************************************ 00:11:43.879 00:31:17 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:11:43.879 00:31:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:43.879 00:31:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.879 00:31:17 -- common/autotest_common.sh@10 -- # set +x 00:11:43.879 ************************************ 00:11:43.879 START TEST accel_crc32c 00:11:43.879 ************************************ 00:11:43.879 00:31:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:11:43.879 00:31:17 -- accel/accel.sh@16 -- # local accel_opc 00:11:43.879 00:31:17 -- accel/accel.sh@17 -- # local accel_module 00:11:43.879 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:43.879 00:31:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:43.879 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:43.879 00:31:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:43.879 00:31:17 -- accel/accel.sh@12 -- # build_accel_config 00:11:43.879 00:31:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:43.879 00:31:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:43.879 00:31:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.879 00:31:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.879 00:31:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:43.879 00:31:17 -- accel/accel.sh@40 -- # local IFS=, 00:11:43.879 00:31:17 -- accel/accel.sh@41 -- # jq -r . 00:11:43.879 [2024-04-27 00:31:17.412578] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:11:43.879 [2024-04-27 00:31:17.412734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114218 ] 00:11:44.136 [2024-04-27 00:31:17.566596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.395 [2024-04-27 00:31:17.771765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val= 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val= 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val=0x1 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val= 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val= 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val=crc32c 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val=32 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val= 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val=software 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@22 -- # accel_module=software 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val=32 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val=32 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val=1 00:11:44.395 00:31:17 
-- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val=Yes 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val= 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:44.395 00:31:17 -- accel/accel.sh@20 -- # val= 00:11:44.395 00:31:17 -- accel/accel.sh@21 -- # case "$var" in 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # IFS=: 00:11:44.395 00:31:17 -- accel/accel.sh@19 -- # read -r var val 00:11:46.297 00:31:19 -- accel/accel.sh@20 -- # val= 00:11:46.297 00:31:19 -- accel/accel.sh@21 -- # case "$var" in 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # IFS=: 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # read -r var val 00:11:46.297 00:31:19 -- accel/accel.sh@20 -- # val= 00:11:46.297 00:31:19 -- accel/accel.sh@21 -- # case "$var" in 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # IFS=: 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # read -r var val 00:11:46.297 00:31:19 -- accel/accel.sh@20 -- # val= 00:11:46.297 00:31:19 -- accel/accel.sh@21 -- # case "$var" in 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # IFS=: 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # read -r var val 00:11:46.297 00:31:19 -- accel/accel.sh@20 -- # val= 00:11:46.297 00:31:19 -- accel/accel.sh@21 -- # case "$var" in 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # IFS=: 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # read -r var val 00:11:46.297 00:31:19 -- accel/accel.sh@20 -- # val= 00:11:46.297 00:31:19 -- accel/accel.sh@21 -- # case "$var" in 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # IFS=: 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # read -r var val 00:11:46.297 00:31:19 -- accel/accel.sh@20 -- # val= 00:11:46.297 00:31:19 -- accel/accel.sh@21 -- # case "$var" in 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # IFS=: 00:11:46.297 00:31:19 -- accel/accel.sh@19 -- # read -r var val 00:11:46.297 00:31:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:46.297 00:31:19 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:11:46.297 00:31:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:46.297 00:11:46.297 real 0m2.382s 00:11:46.297 user 0m2.129s 00:11:46.297 sys 0m0.173s 00:11:46.297 00:31:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:46.297 ************************************ 00:11:46.297 END TEST accel_crc32c 00:11:46.297 ************************************ 00:11:46.297 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:11:46.297 00:31:19 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:11:46.297 00:31:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:11:46.297 00:31:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.297 00:31:19 -- common/autotest_common.sh@10 -- # set +x 00:11:46.297 ************************************ 00:11:46.297 START TEST accel_crc32c_C2 00:11:46.297 
00:31:19 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
************************************
START TEST accel_crc32c_C2
************************************
00:31:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2
00:31:19 -- accel/accel.sh@12 -- # build_accel_config
00:31:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:31:19 -- accel/accel.sh@41 -- # jq -r .
[2024-04-27 00:31:19.893671] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
[2024-04-27 00:31:19.893877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114272 ]
[2024-04-27 00:31:20.061107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-27 00:31:20.262967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:20 -- accel/accel.sh@20 -- # val=0x1
00:31:20 -- accel/accel.sh@20 -- # val=crc32c
00:31:20 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:31:20 -- accel/accel.sh@20 -- # val=0
00:31:20 -- accel/accel.sh@20 -- # val='4096 bytes'
00:31:20 -- accel/accel.sh@20 -- # val=software
00:31:20 -- accel/accel.sh@22 -- # accel_module=software
00:31:20 -- accel/accel.sh@20 -- # val=32
00:31:20 -- accel/accel.sh@20 -- # val=32
00:31:20 -- accel/accel.sh@20 -- # val=1
00:31:20 -- accel/accel.sh@20 -- # val='1 seconds'
00:31:20 -- accel/accel.sh@20 -- # val=Yes
00:31:22 -- accel/accel.sh@27 -- # [[ -n software ]]
00:31:22 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:31:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m2.377s
user 0m2.080s
sys 0m0.209s
************************************
END TEST accel_crc32c_C2
************************************
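The _C2 variant passes -C 2, i.e. two data vectors per operation. A plausible reading, and it is only an assumption here, is that a single CRC is carried across both 4 KiB vectors by seeding each computation with the previous result, as in this sketch (link it against the crc32c() function from the previous sketch):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

extern uint32_t crc32c(uint32_t crc, const void *buf, size_t len); /* from the sketch above */

int main(void)
{
	static uint8_t vec0[4096], vec1[4096];	/* two vectors, per -C 2 */
	uint32_t crc = 0;

	crc = crc32c(crc, vec0, sizeof(vec0));	/* second call is seeded with the */
	crc = crc32c(crc, vec1, sizeof(vec1));	/* first call's result */
	printf("chained crc = 0x%08x\n", crc);
	return 0;
}
```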
00:31:22 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
************************************
START TEST accel_copy
************************************
00:31:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y
00:31:22 -- accel/accel.sh@12 -- # build_accel_config
00:31:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:31:22 -- accel/accel.sh@41 -- # jq -r .
[2024-04-27 00:31:22.359187] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
[2024-04-27 00:31:22.359391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114327 ]
[2024-04-27 00:31:22.523247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-27 00:31:22.745152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:22 -- accel/accel.sh@20 -- # val=0x1
00:31:22 -- accel/accel.sh@20 -- # val=copy
00:31:22 -- accel/accel.sh@23 -- # accel_opc=copy
00:31:22 -- accel/accel.sh@20 -- # val='4096 bytes'
00:31:22 -- accel/accel.sh@20 -- # val=software
00:31:22 -- accel/accel.sh@22 -- # accel_module=software
00:31:22 -- accel/accel.sh@20 -- # val=32
00:31:22 -- accel/accel.sh@20 -- # val=32
00:31:22 -- accel/accel.sh@20 -- # val=1
00:31:22 -- accel/accel.sh@20 -- # val='1 seconds'
00:31:22 -- accel/accel.sh@20 -- # val=Yes
00:31:24 -- accel/accel.sh@27 -- # [[ -n software ]]
00:31:24 -- accel/accel.sh@27 -- # [[ -n copy ]]
00:31:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m2.506s
user 0m2.224s
sys 0m0.207s
************************************
END TEST accel_copy
************************************
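On the software path the copy operation presumably reduces to a memcpy plus the verification pass requested by -y; a minimal sketch, with the buffer size taken from the log and all names illustrative:

```c
#include <assert.h>
#include <string.h>

int main(void)
{
	static unsigned char src[4096], dst[4096];

	memset(src, 0xA5, sizeof(src));			/* arbitrary test pattern */
	memcpy(dst, src, sizeof(dst));			/* the copy op itself */
	assert(memcmp(dst, src, sizeof(dst)) == 0);	/* -y: verify the result */
	return 0;
}
```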
00:31:24 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
************************************
START TEST accel_fill
************************************
00:31:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:31:24 -- accel/accel.sh@12 -- # build_accel_config
00:31:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:31:24 -- accel/accel.sh@41 -- # jq -r .
[2024-04-27 00:31:24.943785] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
[2024-04-27 00:31:24.944014] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114392 ]
[2024-04-27 00:31:25.126144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-27 00:31:25.366239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:25 -- accel/accel.sh@20 -- # val=0x1
00:31:25 -- accel/accel.sh@20 -- # val=fill
00:31:25 -- accel/accel.sh@23 -- # accel_opc=fill
00:31:25 -- accel/accel.sh@20 -- # val=0x80
00:31:25 -- accel/accel.sh@20 -- # val='4096 bytes'
00:31:25 -- accel/accel.sh@20 -- # val=software
00:31:25 -- accel/accel.sh@22 -- # accel_module=software
00:31:25 -- accel/accel.sh@20 -- # val=64
00:31:25 -- accel/accel.sh@20 -- # val=64
00:31:25 -- accel/accel.sh@20 -- # val=1
00:31:25 -- accel/accel.sh@20 -- # val='1 seconds'
00:31:25 -- accel/accel.sh@20 -- # val=Yes
00:31:27 -- accel/accel.sh@27 -- # [[ -n software ]]
00:31:27 -- accel/accel.sh@27 -- # [[ -n fill ]]
00:31:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m2.561s
user 0m2.287s
sys 0m0.195s
************************************
END TEST accel_fill
************************************
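The fill run adds -f 128 -q 64 -a 64: the 0x80 in the config dump is the fill byte (decimal 128), and the two 64s line up with the queue-depth and alignment flags, though that mapping is an inference from accel_perf's usage text rather than something the log states. In software, fill is essentially a memset:

```c
#include <assert.h>
#include <string.h>

int main(void)
{
	static unsigned char dst[4096];

	memset(dst, 0x80, sizeof(dst));		/* fill byte 0x80, i.e. -f 128 */
	for (size_t i = 0; i < sizeof(dst); i++)
		assert(dst[i] == 0x80);		/* -y: verify */
	return 0;
}
```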
00:31:27 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
************************************
START TEST accel_copy_crc32c
************************************
00:31:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y
00:31:27 -- accel/accel.sh@12 -- # build_accel_config
00:31:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:31:27 -- accel/accel.sh@41 -- # jq -r .
[2024-04-27 00:31:27.575747] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
[2024-04-27 00:31:27.575896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114452 ]
[2024-04-27 00:31:27.736106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-27 00:31:28.004933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:28 -- accel/accel.sh@20 -- # val=0x1
00:31:28 -- accel/accel.sh@20 -- # val=copy_crc32c
00:31:28 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:31:28 -- accel/accel.sh@20 -- # val=0
00:31:28 -- accel/accel.sh@20 -- # val='4096 bytes'
00:31:28 -- accel/accel.sh@20 -- # val='4096 bytes'
00:31:28 -- accel/accel.sh@20 -- # val=software
00:31:28 -- accel/accel.sh@22 -- # accel_module=software
00:31:28 -- accel/accel.sh@20 -- # val=32
00:31:28 -- accel/accel.sh@20 -- # val=32
00:31:28 -- accel/accel.sh@20 -- # val=1
00:31:28 -- accel/accel.sh@20 -- # val='1 seconds'
00:31:28 -- accel/accel.sh@20 -- # val=Yes
00:31:30 -- accel/accel.sh@27 -- # [[ -n software ]]
00:31:30 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:31:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m2.598s
user 0m2.316s
sys 0m0.208s
************************************
END TEST accel_copy_crc32c
************************************
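copy_crc32c combines the two earlier operations: the source is copied and a CRC-32C is computed over the same bytes. A fused single-pass sketch follows; it is illustrative only, SPDK may well do the copy and the CRC as separate steps.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Copy src to dst and accumulate CRC-32C over the bytes in one pass. */
static uint32_t copy_crc32c(void *dst, const void *src, size_t len)
{
	const uint8_t *s = src;
	uint8_t *d = dst;
	uint32_t crc = ~(uint32_t)0;

	while (len--) {
		uint8_t b = *s++;

		*d++ = b;		/* the copy half */
		crc ^= b;		/* the CRC half */
		for (int k = 0; k < 8; k++)
			crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
	}
	return ~crc;
}

int main(void)
{
	static uint8_t src[4096], dst[4096];

	printf("crc = 0x%08x\n", copy_crc32c(dst, src, sizeof(src)));
	return 0;
}
```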
00:31:30 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
************************************
START TEST accel_copy_crc32c_C2
************************************
00:31:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:31:30 -- accel/accel.sh@12 -- # build_accel_config
00:31:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:31:30 -- accel/accel.sh@41 -- # jq -r .
[2024-04-27 00:31:30.261113] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
[2024-04-27 00:31:30.261291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114514 ]
[2024-04-27 00:31:30.430195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-27 00:31:30.715215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:30 -- accel/accel.sh@20 -- # val=0x1
00:31:30 -- accel/accel.sh@20 -- # val=copy_crc32c
00:31:30 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:31:30 -- accel/accel.sh@20 -- # val=0
00:31:30 -- accel/accel.sh@20 -- # val='4096 bytes'
00:31:30 -- accel/accel.sh@20 -- # val='8192 bytes'
00:31:30 -- accel/accel.sh@20 -- # val=software
00:31:30 -- accel/accel.sh@22 -- # accel_module=software
00:31:30 -- accel/accel.sh@20 -- # val=32
00:31:30 -- accel/accel.sh@20 -- # val=32
00:31:30 -- accel/accel.sh@20 -- # val=1
00:31:30 -- accel/accel.sh@20 -- # val='1 seconds'
00:31:30 -- accel/accel.sh@20 -- # val=Yes
00:31:32 -- accel/accel.sh@27 -- # [[ -n software ]]
00:31:32 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:31:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m2.614s
user 0m2.336s
sys 0m0.197s
************************************
END TEST accel_copy_crc32c_C2
************************************
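Note the config dump above: the vector size stays at '4096 bytes' but a second size of '8192 bytes' appears, consistent with two source vectors being copied back-to-back into one destination while a single CRC is chained across them. A sketch along those lines, with the layout explicitly an assumption, reusing crc32c() from the first sketch:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>

extern uint32_t crc32c(uint32_t crc, const void *buf, size_t len); /* from the first sketch */

int main(void)
{
	static uint8_t src0[4096], src1[4096], dst[8192];
	uint32_t crc = 0;

	memcpy(dst, src0, 4096);		/* first 4 KiB vector */
	crc = crc32c(crc, src0, 4096);
	memcpy(dst + 4096, src1, 4096);		/* second vector, appended */
	crc = crc32c(crc, src1, 4096);
	printf("chained crc = 0x%08x\n", crc);
	return 0;
}
```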
00:31:32 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
************************************
START TEST accel_dualcast
************************************
00:31:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y
00:31:32 -- accel/accel.sh@12 -- # build_accel_config
00:31:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:31:32 -- accel/accel.sh@41 -- # jq -r .
[2024-04-27 00:31:32.964102] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
[2024-04-27 00:31:32.964288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114581 ]
[2024-04-27 00:31:33.131977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-27 00:31:33.335479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:33 -- accel/accel.sh@20 -- # val=0x1
00:31:33 -- accel/accel.sh@20 -- # val=dualcast
00:31:33 -- accel/accel.sh@23 -- # accel_opc=dualcast
00:31:33 -- accel/accel.sh@20 -- # val='4096 bytes'
00:31:33 -- accel/accel.sh@20 -- # val=software
00:31:33 -- accel/accel.sh@22 -- # accel_module=software
00:31:33 -- accel/accel.sh@20 -- # val=32
00:31:33 -- accel/accel.sh@20 -- # val=32
00:31:33 -- accel/accel.sh@20 -- # val=1
00:31:33 -- accel/accel.sh@20 -- # val='1 seconds'
00:31:33 -- accel/accel.sh@20 -- # val=Yes
00:31:35 -- accel/accel.sh@27 -- # [[ -n software ]]
00:31:35 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:31:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m2.389s
user 0m2.129s
sys 0m0.177s
************************************
END TEST accel_dualcast
************************************
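dualcast writes one source to two destinations, the pattern hardware engines accelerate for mirrored writes; in software it presumably amounts to two memcpy calls. A minimal sketch, names illustrative:

```c
#include <assert.h>
#include <string.h>

int main(void)
{
	static unsigned char src[4096], dst1[4096], dst2[4096];

	memset(src, 0x5A, sizeof(src));
	memcpy(dst1, src, sizeof(dst1));	/* same source ... */
	memcpy(dst2, src, sizeof(dst2));	/* ... to two destinations */
	assert(memcmp(dst1, dst2, sizeof(dst1)) == 0);
	return 0;
}
```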
00:31:35 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
************************************
START TEST accel_compare
************************************
00:31:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y
00:31:35 -- accel/accel.sh@12 -- # build_accel_config
00:31:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:31:35 -- accel/accel.sh@41 -- # jq -r .
[2024-04-27 00:31:35.432990] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
[2024-04-27 00:31:35.433155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114636 ]
[2024-04-27 00:31:35.605563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-27 00:31:35.813426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:31:36 -- accel/accel.sh@20 -- # val=0x1
00:31:36 -- accel/accel.sh@20 -- # val=compare
00:31:36 -- accel/accel.sh@23 -- # accel_opc=compare
00:31:36 -- accel/accel.sh@20 -- # val='4096 bytes'
00:31:36 -- accel/accel.sh@20 -- # val=software
00:31:36 -- accel/accel.sh@22 -- # accel_module=software
00:31:36 -- accel/accel.sh@20 -- # val=32
00:31:36 -- accel/accel.sh@20 -- # val=32
00:31:36 -- accel/accel.sh@20 -- # val=1
00:31:36 -- accel/accel.sh@20 -- # val='1 seconds'
00:31:36 -- accel/accel.sh@20 -- # val=Yes
00:31:37 -- accel/accel.sh@27 -- # [[ -n software ]]
00:31:37 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:31:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]

real 0m2.506s
user 0m2.230s
sys 0m0.184s
************************************
END TEST accel_compare
************************************
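compare checks two buffers for equality, which on the software path is essentially a memcmp:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
	static unsigned char a[4096], b[4096];	/* both zero-initialized, so they match */

	puts(memcmp(a, b, sizeof(a)) == 0 ? "match" : "mismatch");
	return 0;
}
```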
00:12:04.436 [2024-04-27 00:31:38.019425] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114700 ] 00:12:04.695 [2024-04-27 00:31:38.189530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.954 [2024-04-27 00:31:38.422491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val= 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val= 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val=0x1 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val= 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val= 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val=xor 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val=2 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val= 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val=software 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@22 -- # accel_module=software 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val=32 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val=32 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val=1 00:12:05.213 00:31:38 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val=Yes 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val= 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:05.213 00:31:38 -- accel/accel.sh@20 -- # val= 00:12:05.213 00:31:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # IFS=: 00:12:05.213 00:31:38 -- accel/accel.sh@19 -- # read -r var val 00:12:07.115 00:31:40 -- accel/accel.sh@20 -- # val= 00:12:07.115 00:31:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.115 00:31:40 -- accel/accel.sh@19 -- # IFS=: 00:12:07.115 00:31:40 -- accel/accel.sh@19 -- # read -r var val 00:12:07.115 00:31:40 -- accel/accel.sh@20 -- # val= 00:12:07.116 00:31:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # IFS=: 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # read -r var val 00:12:07.116 00:31:40 -- accel/accel.sh@20 -- # val= 00:12:07.116 00:31:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # IFS=: 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # read -r var val 00:12:07.116 00:31:40 -- accel/accel.sh@20 -- # val= 00:12:07.116 00:31:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # IFS=: 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # read -r var val 00:12:07.116 00:31:40 -- accel/accel.sh@20 -- # val= 00:12:07.116 00:31:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # IFS=: 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # read -r var val 00:12:07.116 00:31:40 -- accel/accel.sh@20 -- # val= 00:12:07.116 00:31:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # IFS=: 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # read -r var val 00:12:07.116 00:31:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:07.116 00:31:40 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:07.116 00:31:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.116 00:12:07.116 real 0m2.540s 00:12:07.116 user 0m2.281s 00:12:07.116 sys 0m0.186s 00:12:07.116 00:31:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.116 00:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:07.116 ************************************ 00:12:07.116 END TEST accel_xor 00:12:07.116 ************************************ 00:12:07.116 00:31:40 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:07.116 00:31:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:07.116 00:31:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:07.116 00:31:40 -- common/autotest_common.sh@10 -- # set +x 00:12:07.116 ************************************ 00:12:07.116 START TEST accel_xor 00:12:07.116 ************************************ 00:12:07.116 
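The second accel_xor run below adds -x 3, and the "val=3" in its trace (versus "val=2" above) records three XOR source buffers instead of the default two. A sketch of the equivalent standalone command, under the same assumptions as the earlier one:

  # -x 3: XOR three source buffers into one destination (the default is 2,
  # per the val=2 recorded by the previous run)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3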
00:31:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:12:07.116 00:31:40 -- accel/accel.sh@16 -- # local accel_opc 00:12:07.116 00:31:40 -- accel/accel.sh@17 -- # local accel_module 00:12:07.116 00:31:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # IFS=: 00:12:07.116 00:31:40 -- accel/accel.sh@19 -- # read -r var val 00:12:07.116 00:31:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:07.116 00:31:40 -- accel/accel.sh@12 -- # build_accel_config 00:12:07.116 00:31:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:07.116 00:31:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:07.116 00:31:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.116 00:31:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.116 00:31:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:07.116 00:31:40 -- accel/accel.sh@40 -- # local IFS=, 00:12:07.116 00:31:40 -- accel/accel.sh@41 -- # jq -r . 00:12:07.116 [2024-04-27 00:31:40.644149] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:07.116 [2024-04-27 00:31:40.644327] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114760 ] 00:12:07.374 [2024-04-27 00:31:40.816948] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.632 [2024-04-27 00:31:41.087386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val= 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val= 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val=0x1 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val= 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val= 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val=xor 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val=3 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 
00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val= 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val=software 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@22 -- # accel_module=software 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val=32 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val=32 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val=1 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val=Yes 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.893 00:31:41 -- accel/accel.sh@20 -- # val= 00:12:07.893 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.893 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:07.894 00:31:41 -- accel/accel.sh@20 -- # val= 00:12:07.894 00:31:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:07.894 00:31:41 -- accel/accel.sh@19 -- # IFS=: 00:12:07.894 00:31:41 -- accel/accel.sh@19 -- # read -r var val 00:12:09.865 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:09.865 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:09.865 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:09.865 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:09.865 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:09.865 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:09.865 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:09.865 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:09.865 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:09.865 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:09.865 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:09.865 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 
00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:09.865 00:31:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:09.865 00:31:43 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:09.865 00:31:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:09.865 00:12:09.865 real 0m2.579s 00:12:09.865 user 0m2.308s 00:12:09.865 sys 0m0.198s 00:12:09.865 00:31:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:09.865 ************************************ 00:12:09.865 END TEST accel_xor 00:12:09.865 ************************************ 00:12:09.865 00:31:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.865 00:31:43 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:09.865 00:31:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:09.865 00:31:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.865 00:31:43 -- common/autotest_common.sh@10 -- # set +x 00:12:09.865 ************************************ 00:12:09.865 START TEST accel_dif_verify 00:12:09.865 ************************************ 00:12:09.865 00:31:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:12:09.865 00:31:43 -- accel/accel.sh@16 -- # local accel_opc 00:12:09.865 00:31:43 -- accel/accel.sh@17 -- # local accel_module 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:09.865 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:09.865 00:31:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:09.865 00:31:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:09.865 00:31:43 -- accel/accel.sh@12 -- # build_accel_config 00:12:09.865 00:31:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:09.865 00:31:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:09.865 00:31:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:09.865 00:31:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:09.865 00:31:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:09.865 00:31:43 -- accel/accel.sh@40 -- # local IFS=, 00:12:09.865 00:31:43 -- accel/accel.sh@41 -- # jq -r . 00:12:09.865 [2024-04-27 00:31:43.308225] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
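The dif_verify segment under way here checks T10 DIF protection information. The xtrace strips the option labels, but the recorded vals read naturally as a 4096-byte data buffer, a 4096-byte metadata-carrying buffer, a 512-byte logical block size, and an 8-byte per-block DIF tuple, consistent with the standard T10 DIF layout; treat that mapping as an inference from the trace rather than documented output. Standalone sketch:

  # dif_verify: verify per-block DIF tuples; the sizes above are defaults
  # inferred from the traced vals, not flags passed by this run
  ./build/examples/accel_perf -t 1 -w dif_verify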
00:12:09.865 [2024-04-27 00:31:43.308468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114820 ] 00:12:10.123 [2024-04-27 00:31:43.491257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.382 [2024-04-27 00:31:43.728448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val=0x1 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val=dif_verify 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val='512 bytes' 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val='8 bytes' 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val=software 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@22 -- # accel_module=software 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- 
accel/accel.sh@20 -- # val=32 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val=32 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val=1 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val=No 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:10.382 00:31:43 -- accel/accel.sh@20 -- # val= 00:12:10.382 00:31:43 -- accel/accel.sh@21 -- # case "$var" in 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # IFS=: 00:12:10.382 00:31:43 -- accel/accel.sh@19 -- # read -r var val 00:12:12.282 00:31:45 -- accel/accel.sh@20 -- # val= 00:12:12.282 00:31:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # IFS=: 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # read -r var val 00:12:12.282 00:31:45 -- accel/accel.sh@20 -- # val= 00:12:12.282 00:31:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # IFS=: 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # read -r var val 00:12:12.282 00:31:45 -- accel/accel.sh@20 -- # val= 00:12:12.282 00:31:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # IFS=: 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # read -r var val 00:12:12.282 00:31:45 -- accel/accel.sh@20 -- # val= 00:12:12.282 00:31:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # IFS=: 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # read -r var val 00:12:12.282 00:31:45 -- accel/accel.sh@20 -- # val= 00:12:12.282 00:31:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # IFS=: 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # read -r var val 00:12:12.282 00:31:45 -- accel/accel.sh@20 -- # val= 00:12:12.282 00:31:45 -- accel/accel.sh@21 -- # case "$var" in 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # IFS=: 00:12:12.282 00:31:45 -- accel/accel.sh@19 -- # read -r var val 00:12:12.282 00:31:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:12.282 00:31:45 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:12.282 00:31:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.282 00:12:12.282 real 0m2.555s 00:12:12.282 user 0m2.279s 00:12:12.282 sys 0m0.180s 00:12:12.282 00:31:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:12.282 ************************************ 00:12:12.282 END TEST accel_dif_verify 00:12:12.282 
************************************ 00:12:12.282 00:31:45 -- common/autotest_common.sh@10 -- # set +x 00:12:12.282 00:31:45 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:12.282 00:31:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:12.282 00:31:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:12.282 00:31:45 -- common/autotest_common.sh@10 -- # set +x 00:12:12.541 ************************************ 00:12:12.541 START TEST accel_dif_generate 00:12:12.541 ************************************ 00:12:12.541 00:31:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:12:12.541 00:31:45 -- accel/accel.sh@16 -- # local accel_opc 00:12:12.541 00:31:45 -- accel/accel.sh@17 -- # local accel_module 00:12:12.541 00:31:45 -- accel/accel.sh@19 -- # IFS=: 00:12:12.541 00:31:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:12.541 00:31:45 -- accel/accel.sh@19 -- # read -r var val 00:12:12.541 00:31:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:12.541 00:31:45 -- accel/accel.sh@12 -- # build_accel_config 00:12:12.541 00:31:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:12.541 00:31:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:12.541 00:31:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.541 00:31:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.541 00:31:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:12.541 00:31:45 -- accel/accel.sh@40 -- # local IFS=, 00:12:12.541 00:31:45 -- accel/accel.sh@41 -- # jq -r . 00:12:12.541 [2024-04-27 00:31:45.935921] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:12.541 [2024-04-27 00:31:45.936336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114881 ] 00:12:12.541 [2024-04-27 00:31:46.109057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.799 [2024-04-27 00:31:46.342592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val= 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val= 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val=0x1 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val= 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val= 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val=dif_generate 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 
00:12:13.058 00:31:46 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val='512 bytes' 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val='8 bytes' 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val= 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val=software 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@22 -- # accel_module=software 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val=32 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val=32 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val=1 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val=No 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val= 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:13.058 00:31:46 -- accel/accel.sh@20 -- # val= 00:12:13.058 00:31:46 -- accel/accel.sh@21 -- # case "$var" in 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # IFS=: 00:12:13.058 00:31:46 -- accel/accel.sh@19 -- # read -r var val 00:12:14.962 00:31:48 -- accel/accel.sh@20 -- # val= 00:12:14.962 00:31:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # IFS=: 00:12:14.962 
00:31:48 -- accel/accel.sh@19 -- # read -r var val 00:12:14.962 00:31:48 -- accel/accel.sh@20 -- # val= 00:12:14.962 00:31:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # IFS=: 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # read -r var val 00:12:14.962 00:31:48 -- accel/accel.sh@20 -- # val= 00:12:14.962 00:31:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # IFS=: 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # read -r var val 00:12:14.962 00:31:48 -- accel/accel.sh@20 -- # val= 00:12:14.962 00:31:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # IFS=: 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # read -r var val 00:12:14.962 00:31:48 -- accel/accel.sh@20 -- # val= 00:12:14.962 00:31:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # IFS=: 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # read -r var val 00:12:14.962 00:31:48 -- accel/accel.sh@20 -- # val= 00:12:14.962 00:31:48 -- accel/accel.sh@21 -- # case "$var" in 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # IFS=: 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # read -r var val 00:12:14.962 00:31:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:14.962 00:31:48 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:14.962 00:31:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:14.962 00:12:14.962 real 0m2.422s 00:12:14.962 user 0m2.160s 00:12:14.962 sys 0m0.183s 00:12:14.962 00:31:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:14.962 ************************************ 00:12:14.962 END TEST accel_dif_generate 00:12:14.962 ************************************ 00:12:14.962 00:31:48 -- common/autotest_common.sh@10 -- # set +x 00:12:14.962 00:31:48 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:14.962 00:31:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:14.962 00:31:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.962 00:31:48 -- common/autotest_common.sh@10 -- # set +x 00:12:14.962 ************************************ 00:12:14.962 START TEST accel_dif_generate_copy 00:12:14.962 ************************************ 00:12:14.962 00:31:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:12:14.962 00:31:48 -- accel/accel.sh@16 -- # local accel_opc 00:12:14.962 00:31:48 -- accel/accel.sh@17 -- # local accel_module 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # IFS=: 00:12:14.962 00:31:48 -- accel/accel.sh@19 -- # read -r var val 00:12:14.962 00:31:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:14.962 00:31:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:14.962 00:31:48 -- accel/accel.sh@12 -- # build_accel_config 00:12:14.962 00:31:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:14.962 00:31:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:14.962 00:31:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:14.962 00:31:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:14.962 00:31:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:14.962 00:31:48 -- accel/accel.sh@40 -- # local IFS=, 00:12:14.962 00:31:48 -- accel/accel.sh@41 -- # jq -r . 00:12:14.962 [2024-04-27 00:31:48.446849] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
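The dif_generate_copy run under way here exercises a distinct opcode from dif_generate: it produces the DIF tuples and copies the protected data into a separate destination buffer in a single operation, which is why the harness tests it separately even though its traced parameters look identical. Sketch, same assumptions as above:

  # dif_generate_copy: generate DIF tuples and copy to a new buffer in one op
  ./build/examples/accel_perf -t 1 -w dif_generate_copy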
00:12:14.962 [2024-04-27 00:31:48.447036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114944 ] 00:12:15.221 [2024-04-27 00:31:48.617054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.480 [2024-04-27 00:31:48.812832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val= 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val= 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val=0x1 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val= 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val= 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val= 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val=software 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@22 -- # accel_module=software 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val=32 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val=32 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 
-- # val=1 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val=No 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val= 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:15.480 00:31:49 -- accel/accel.sh@20 -- # val= 00:12:15.480 00:31:49 -- accel/accel.sh@21 -- # case "$var" in 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # IFS=: 00:12:15.480 00:31:49 -- accel/accel.sh@19 -- # read -r var val 00:12:17.406 00:31:50 -- accel/accel.sh@20 -- # val= 00:12:17.406 00:31:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # IFS=: 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # read -r var val 00:12:17.406 00:31:50 -- accel/accel.sh@20 -- # val= 00:12:17.406 00:31:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # IFS=: 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # read -r var val 00:12:17.406 00:31:50 -- accel/accel.sh@20 -- # val= 00:12:17.406 00:31:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # IFS=: 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # read -r var val 00:12:17.406 00:31:50 -- accel/accel.sh@20 -- # val= 00:12:17.406 00:31:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # IFS=: 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # read -r var val 00:12:17.406 00:31:50 -- accel/accel.sh@20 -- # val= 00:12:17.406 00:31:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # IFS=: 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # read -r var val 00:12:17.406 00:31:50 -- accel/accel.sh@20 -- # val= 00:12:17.406 00:31:50 -- accel/accel.sh@21 -- # case "$var" in 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # IFS=: 00:12:17.406 00:31:50 -- accel/accel.sh@19 -- # read -r var val 00:12:17.406 00:31:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:17.406 00:31:50 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:17.406 00:31:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:17.406 00:12:17.406 real 0m2.403s 00:12:17.406 user 0m2.134s 00:12:17.406 sys 0m0.188s 00:12:17.406 ************************************ 00:12:17.406 END TEST accel_dif_generate_copy 00:12:17.406 ************************************ 00:12:17.406 00:31:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:17.406 00:31:50 -- common/autotest_common.sh@10 -- # set +x 00:12:17.406 00:31:50 -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:17.406 00:31:50 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:17.406 00:31:50 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:17.406 00:31:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.406 00:31:50 -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.406 ************************************ 00:12:17.406 START TEST accel_comp 00:12:17.406 ************************************ 00:12:17.406 00:31:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:17.407 00:31:50 -- accel/accel.sh@16 -- # local accel_opc 00:12:17.407 00:31:50 -- accel/accel.sh@17 -- # local accel_module 00:12:17.407 00:31:50 -- accel/accel.sh@19 -- # IFS=: 00:12:17.407 00:31:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:17.407 00:31:50 -- accel/accel.sh@19 -- # read -r var val 00:12:17.407 00:31:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:17.407 00:31:50 -- accel/accel.sh@12 -- # build_accel_config 00:12:17.407 00:31:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:17.407 00:31:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:17.407 00:31:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:17.407 00:31:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:17.407 00:31:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:17.407 00:31:50 -- accel/accel.sh@40 -- # local IFS=, 00:12:17.407 00:31:50 -- accel/accel.sh@41 -- # jq -r . 00:12:17.407 [2024-04-27 00:31:50.940876] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:17.407 [2024-04-27 00:31:50.941060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115000 ] 00:12:17.666 [2024-04-27 00:31:51.107099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.925 [2024-04-27 00:31:51.319026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.183 00:31:51 -- accel/accel.sh@20 -- # val= 00:12:18.183 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.183 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.183 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.183 00:31:51 -- accel/accel.sh@20 -- # val= 00:12:18.183 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.183 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.183 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.183 00:31:51 -- accel/accel.sh@20 -- # val= 00:12:18.183 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val=0x1 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val= 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val= 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val=compress 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@23 
-- # accel_opc=compress 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val= 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val=software 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@22 -- # accel_module=software 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val=32 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val=32 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val=1 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val=No 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val= 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:18.184 00:31:51 -- accel/accel.sh@20 -- # val= 00:12:18.184 00:31:51 -- accel/accel.sh@21 -- # case "$var" in 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # IFS=: 00:12:18.184 00:31:51 -- accel/accel.sh@19 -- # read -r var val 00:12:20.086 00:31:53 -- accel/accel.sh@20 -- # val= 00:12:20.086 00:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:20.086 00:31:53 -- accel/accel.sh@20 -- # val= 00:12:20.086 00:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:20.086 00:31:53 -- accel/accel.sh@20 -- # val= 00:12:20.086 00:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # 
read -r var val 00:12:20.086 00:31:53 -- accel/accel.sh@20 -- # val= 00:12:20.086 00:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:20.086 00:31:53 -- accel/accel.sh@20 -- # val= 00:12:20.086 00:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:20.086 00:31:53 -- accel/accel.sh@20 -- # val= 00:12:20.086 00:31:53 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:20.086 00:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:20.086 00:31:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:20.086 00:31:53 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:20.087 00:31:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:20.087 00:12:20.087 real 0m2.468s 00:12:20.087 user 0m2.186s 00:12:20.087 sys 0m0.202s 00:12:20.087 00:31:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:20.087 ************************************ 00:12:20.087 END TEST accel_comp 00:12:20.087 00:31:53 -- common/autotest_common.sh@10 -- # set +x 00:12:20.087 ************************************ 00:12:20.087 00:31:53 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:20.087 00:31:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:20.087 00:31:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:20.087 00:31:53 -- common/autotest_common.sh@10 -- # set +x 00:12:20.087 ************************************ 00:12:20.087 START TEST accel_decomp 00:12:20.087 ************************************ 00:12:20.087 00:31:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:20.087 00:31:53 -- accel/accel.sh@16 -- # local accel_opc 00:12:20.087 00:31:53 -- accel/accel.sh@17 -- # local accel_module 00:12:20.087 00:31:53 -- accel/accel.sh@19 -- # IFS=: 00:12:20.087 00:31:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:20.087 00:31:53 -- accel/accel.sh@19 -- # read -r var val 00:12:20.087 00:31:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:20.087 00:31:53 -- accel/accel.sh@12 -- # build_accel_config 00:12:20.087 00:31:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:20.087 00:31:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:20.087 00:31:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:20.087 00:31:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:20.087 00:31:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:20.087 00:31:53 -- accel/accel.sh@40 -- # local IFS=, 00:12:20.087 00:31:53 -- accel/accel.sh@41 -- # jq -r . 00:12:20.087 [2024-04-27 00:31:53.494512] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
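Unlike the fixed-pattern workloads above, the compress test that just finished and the decompress test under way here read real input: the logged commands pass -l /home/vagrant/spdk_repo/spdk/test/accel/bib as the data file, and the decompress run adds -y to verify the inflated output. Sketch of the decompress invocation, same assumptions as above:

  # -l <file>: input corpus for the compression workloads (path from the log);
  # -y: verify the decompressed result
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y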
00:12:20.087 [2024-04-27 00:31:53.494705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115062 ] 00:12:20.087 [2024-04-27 00:31:53.664849] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.344 [2024-04-27 00:31:53.890448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val= 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val= 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val= 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val=0x1 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val= 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val= 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val=decompress 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val= 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val=software 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@22 -- # accel_module=software 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val=32 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- 
accel/accel.sh@20 -- # val=32 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val=1 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val=Yes 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val= 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:20.603 00:31:54 -- accel/accel.sh@20 -- # val= 00:12:20.603 00:31:54 -- accel/accel.sh@21 -- # case "$var" in 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # IFS=: 00:12:20.603 00:31:54 -- accel/accel.sh@19 -- # read -r var val 00:12:22.507 00:31:55 -- accel/accel.sh@20 -- # val= 00:12:22.507 00:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:22.507 00:31:55 -- accel/accel.sh@20 -- # val= 00:12:22.507 00:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:22.507 00:31:55 -- accel/accel.sh@20 -- # val= 00:12:22.507 00:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:22.507 00:31:55 -- accel/accel.sh@20 -- # val= 00:12:22.507 00:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:22.507 00:31:55 -- accel/accel.sh@20 -- # val= 00:12:22.507 00:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:22.507 00:31:55 -- accel/accel.sh@20 -- # val= 00:12:22.507 00:31:55 -- accel/accel.sh@21 -- # case "$var" in 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # IFS=: 00:12:22.507 00:31:55 -- accel/accel.sh@19 -- # read -r var val 00:12:22.507 00:31:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:22.507 00:31:55 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:22.507 00:31:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:22.507 00:12:22.507 real 0m2.472s 00:12:22.507 user 0m2.212s 00:12:22.507 sys 0m0.184s 00:12:22.507 00:31:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:22.507 ************************************ 00:12:22.507 END TEST accel_decomp 00:12:22.507 ************************************ 00:12:22.507 00:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:22.507 00:31:55 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
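The accel_decmop_full variant queued at the end of the line above adds -o 0 to the same decompress command. Where the previous run's trace recorded a '4096 bytes' buffer val, this one records '111250 bytes', which suggests a transfer size of zero makes accel_perf size the operation to the entire input file instead of 4 KiB chunks; that reading is inferred from the vals, not from documented behavior. Sketch:

  # -o 0: transfer size 0; per the traced '111250 bytes' val this appears to
  # decompress the whole input file in one operation
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0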
00:12:22.507 00:31:55 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:22.507 00:31:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.507 00:31:55 -- common/autotest_common.sh@10 -- # set +x 00:12:22.507 ************************************ 00:12:22.507 START TEST accel_decmop_full 00:12:22.507 ************************************ 00:12:22.507 00:31:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:22.507 00:31:56 -- accel/accel.sh@16 -- # local accel_opc 00:12:22.507 00:31:56 -- accel/accel.sh@17 -- # local accel_module 00:12:22.507 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:22.507 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:22.507 00:31:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:22.507 00:31:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:22.507 00:31:56 -- accel/accel.sh@12 -- # build_accel_config 00:12:22.507 00:31:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:22.507 00:31:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:22.507 00:31:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:22.507 00:31:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:22.507 00:31:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:22.507 00:31:56 -- accel/accel.sh@40 -- # local IFS=, 00:12:22.507 00:31:56 -- accel/accel.sh@41 -- # jq -r . 00:12:22.507 [2024-04-27 00:31:56.050307] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:22.507 [2024-04-27 00:31:56.050545] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115122 ] 00:12:22.766 [2024-04-27 00:31:56.218933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.023 [2024-04-27 00:31:56.430003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val= 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val= 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val= 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val=0x1 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val= 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val= 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 
00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val=decompress 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val= 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val=software 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@22 -- # accel_module=software 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val=32 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val=32 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val=1 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val=Yes 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val= 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:23.282 00:31:56 -- accel/accel.sh@20 -- # val= 00:12:23.282 00:31:56 -- accel/accel.sh@21 -- # case "$var" in 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # IFS=: 00:12:23.282 00:31:56 -- accel/accel.sh@19 -- # read -r var val 00:12:25.208 00:31:58 -- accel/accel.sh@20 -- # val= 00:12:25.208 00:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:25.208 00:31:58 -- accel/accel.sh@20 -- # val= 00:12:25.208 00:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # read -r 
var val 00:12:25.208 00:31:58 -- accel/accel.sh@20 -- # val= 00:12:25.208 00:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:25.208 00:31:58 -- accel/accel.sh@20 -- # val= 00:12:25.208 00:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:25.208 00:31:58 -- accel/accel.sh@20 -- # val= 00:12:25.208 00:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:25.208 00:31:58 -- accel/accel.sh@20 -- # val= 00:12:25.208 00:31:58 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:25.208 00:31:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:25.208 00:31:58 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:25.208 00:31:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:25.208 00:12:25.208 real 0m2.421s 00:12:25.208 user 0m2.128s 00:12:25.208 sys 0m0.218s 00:12:25.208 00:31:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:25.208 ************************************ 00:12:25.208 END TEST accel_decmop_full 00:12:25.208 00:31:58 -- common/autotest_common.sh@10 -- # set +x 00:12:25.208 ************************************ 00:12:25.208 00:31:58 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:25.208 00:31:58 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:25.208 00:31:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.208 00:31:58 -- common/autotest_common.sh@10 -- # set +x 00:12:25.208 ************************************ 00:12:25.208 START TEST accel_decomp_mcore 00:12:25.208 ************************************ 00:12:25.208 00:31:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:25.208 00:31:58 -- accel/accel.sh@16 -- # local accel_opc 00:12:25.208 00:31:58 -- accel/accel.sh@17 -- # local accel_module 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # IFS=: 00:12:25.208 00:31:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:25.208 00:31:58 -- accel/accel.sh@19 -- # read -r var val 00:12:25.208 00:31:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:25.208 00:31:58 -- accel/accel.sh@12 -- # build_accel_config 00:12:25.208 00:31:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:25.208 00:31:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:25.208 00:31:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:25.208 00:31:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:25.208 00:31:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:25.208 00:31:58 -- accel/accel.sh@40 -- # local IFS=, 00:12:25.208 00:31:58 -- accel/accel.sh@41 -- # jq -r . 00:12:25.208 [2024-04-27 00:31:58.558221] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:12:25.208 [2024-04-27 00:31:58.558407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115186 ] 00:12:25.208 [2024-04-27 00:31:58.744337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.467 [2024-04-27 00:31:58.963458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.467 [2024-04-27 00:31:58.963724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.467 [2024-04-27 00:31:58.963725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.467 [2024-04-27 00:31:58.964161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val= 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val= 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val= 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val=0xf 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val= 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val= 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val=decompress 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val= 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val=software 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@22 -- # accel_module=software 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 
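The -m 0xf passed to the mcore run above is a plain core bitmask: one reactor per set bit, which is why four "Reactor started on core N" notices appear, and in arbitrary order, since the reactors come up concurrently. The mask itself is ordinary bit arithmetic:

  # 0xf = cores 0-3, one bit per core
  printf '0x%x\n' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xf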
00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val=32 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val=32 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val=1 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val=Yes 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val= 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:25.725 00:31:59 -- accel/accel.sh@20 -- # val= 00:12:25.725 00:31:59 -- accel/accel.sh@21 -- # case "$var" in 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # IFS=: 00:12:25.725 00:31:59 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- 
accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:27.623 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:27.623 00:32:01 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:27.623 ************************************ 00:12:27.623 END TEST accel_decomp_mcore 00:12:27.623 ************************************ 00:12:27.623 00:32:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:27.623 00:12:27.623 real 0m2.509s 00:12:27.623 user 0m7.299s 00:12:27.623 sys 0m0.211s 00:12:27.623 00:32:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:27.623 00:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:27.623 00:32:01 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:27.623 00:32:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:27.623 00:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:27.623 00:32:01 -- common/autotest_common.sh@10 -- # set +x 00:12:27.623 ************************************ 00:12:27.623 START TEST accel_decomp_full_mcore 00:12:27.623 ************************************ 00:12:27.623 00:32:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:27.623 00:32:01 -- accel/accel.sh@16 -- # local accel_opc 00:12:27.623 00:32:01 -- accel/accel.sh@17 -- # local accel_module 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:27.623 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:27.623 00:32:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:27.623 00:32:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:27.623 00:32:01 -- accel/accel.sh@12 -- # build_accel_config 00:12:27.623 00:32:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:27.623 00:32:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:27.623 00:32:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:27.623 00:32:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:27.623 00:32:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:27.623 00:32:01 -- accel/accel.sh@40 -- # local IFS=, 00:12:27.623 00:32:01 -- accel/accel.sh@41 -- # jq -r . 00:12:27.623 [2024-04-27 00:32:01.151815] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
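Two things worth noting at this boundary. First, the mcore run that just finished reports real 0m2.509s against user 0m7.299s, so the four reactors genuinely decompressed in parallel. Second, the accel_decomp_full_mcore run starting here simply combines the two switches already seen separately, -o 0 (full 111250-byte buffer) and -m 0xf (four cores). A hedged one-line reproduction, with SPDK set as in the earlier sketch and the same fd-62 caveat:

  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf 62< <(echo '{}')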
00:12:27.623 [2024-04-27 00:32:01.152007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115244 ] 00:12:27.882 [2024-04-27 00:32:01.350818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.140 [2024-04-27 00:32:01.564936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.140 [2024-04-27 00:32:01.565203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.140 [2024-04-27 00:32:01.565203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.140 [2024-04-27 00:32:01.565611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val=0xf 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val=decompress 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val=software 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@22 -- # accel_module=software 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 
00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val=32 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val=32 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val=1 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val=Yes 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:28.399 00:32:01 -- accel/accel.sh@20 -- # val= 00:12:28.399 00:32:01 -- accel/accel.sh@21 -- # case "$var" in 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # IFS=: 00:12:28.399 00:32:01 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- 
accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@20 -- # val= 00:12:30.301 00:32:03 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:30.301 00:32:03 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:30.301 00:32:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:30.301 00:12:30.301 real 0m2.547s 00:12:30.301 user 0m7.397s 00:12:30.301 sys 0m0.236s 00:12:30.301 00:32:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:30.301 00:32:03 -- common/autotest_common.sh@10 -- # set +x 00:12:30.301 ************************************ 00:12:30.301 END TEST accel_decomp_full_mcore 00:12:30.301 ************************************ 00:12:30.301 00:32:03 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:30.301 00:32:03 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:30.301 00:32:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.301 00:32:03 -- common/autotest_common.sh@10 -- # set +x 00:12:30.301 ************************************ 00:12:30.301 START TEST accel_decomp_mthread 00:12:30.301 ************************************ 00:12:30.301 00:32:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:30.301 00:32:03 -- accel/accel.sh@16 -- # local accel_opc 00:12:30.301 00:32:03 -- accel/accel.sh@17 -- # local accel_module 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # IFS=: 00:12:30.301 00:32:03 -- accel/accel.sh@19 -- # read -r var val 00:12:30.301 00:32:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:30.301 00:32:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:30.301 00:32:03 -- accel/accel.sh@12 -- # build_accel_config 00:12:30.301 00:32:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:30.301 00:32:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:30.301 00:32:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:30.301 00:32:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:30.301 00:32:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:30.301 00:32:03 -- accel/accel.sh@40 -- # local IFS=, 00:12:30.301 00:32:03 -- accel/accel.sh@41 -- # jq -r . 00:12:30.301 [2024-04-27 00:32:03.768601] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
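accel_decomp_mthread, launched above, adds -T 2. The val=2 record further down (where every single-threaded run set val=1) suggests -T is a worker-threads-per-core count; that reading is inferred from the trace, not from the tool's usage text. Sketch, same assumptions as before:

  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -T 2 62< <(echo '{}')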
00:12:30.301 [2024-04-27 00:32:03.768782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115314 ] 00:12:30.568 [2024-04-27 00:32:03.929109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.842 [2024-04-27 00:32:04.152537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val= 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val= 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val= 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val=0x1 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val= 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val= 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val=decompress 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val= 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val=software 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@22 -- # accel_module=software 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val=32 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- 
accel/accel.sh@20 -- # val=32 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val=2 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val=Yes 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val= 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:30.842 00:32:04 -- accel/accel.sh@20 -- # val= 00:12:30.842 00:32:04 -- accel/accel.sh@21 -- # case "$var" in 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # IFS=: 00:12:30.842 00:32:04 -- accel/accel.sh@19 -- # read -r var val 00:12:32.747 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:32.747 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:32.747 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:32.747 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:32.747 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:32.747 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:32.747 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:32.747 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:32.747 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:32.747 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:32.747 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:32.747 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:32.747 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:32.747 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:32.747 00:32:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:32.747 00:32:06 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:32.747 00:32:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:32.747 00:12:32.747 real 0m2.436s 00:12:32.747 user 0m2.148s 00:12:32.747 sys 0m0.213s 00:12:32.747 00:32:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:32.747 00:32:06 -- common/autotest_common.sh@10 -- # set +x 00:12:32.747 ************************************ 00:12:32.747 END 
TEST accel_decomp_mthread 00:12:32.747 ************************************ 00:12:32.747 00:32:06 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:32.747 00:32:06 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:32.747 00:32:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.747 00:32:06 -- common/autotest_common.sh@10 -- # set +x 00:12:32.747 ************************************ 00:12:32.747 START TEST accel_deomp_full_mthread 00:12:32.747 ************************************ 00:12:32.747 00:32:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:32.747 00:32:06 -- accel/accel.sh@16 -- # local accel_opc 00:12:32.747 00:32:06 -- accel/accel.sh@17 -- # local accel_module 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:32.747 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:32.748 00:32:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:32.748 00:32:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:32.748 00:32:06 -- accel/accel.sh@12 -- # build_accel_config 00:12:32.748 00:32:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:32.748 00:32:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:32.748 00:32:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:32.748 00:32:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:32.748 00:32:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:32.748 00:32:06 -- accel/accel.sh@40 -- # local IFS=, 00:12:32.748 00:32:06 -- accel/accel.sh@41 -- # jq -r . 00:12:32.748 [2024-04-27 00:32:06.293844] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:12:32.748 [2024-04-27 00:32:06.294566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115369 ] 00:12:33.007 [2024-04-27 00:32:06.464810] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.266 [2024-04-27 00:32:06.660763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val=0x1 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val=decompress 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val=software 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@22 -- # accel_module=software 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- accel/accel.sh@20 -- # val=32 00:12:33.525 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.525 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.525 00:32:06 -- 
accel/accel.sh@20 -- # val=32 00:12:33.526 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.526 00:32:06 -- accel/accel.sh@20 -- # val=2 00:12:33.526 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.526 00:32:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:33.526 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.526 00:32:06 -- accel/accel.sh@20 -- # val=Yes 00:12:33.526 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.526 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:33.526 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:33.526 00:32:06 -- accel/accel.sh@20 -- # val= 00:12:33.526 00:32:06 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # IFS=: 00:12:33.526 00:32:06 -- accel/accel.sh@19 -- # read -r var val 00:12:35.459 00:32:08 -- accel/accel.sh@20 -- # val= 00:12:35.459 00:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # IFS=: 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # read -r var val 00:12:35.459 00:32:08 -- accel/accel.sh@20 -- # val= 00:12:35.459 00:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # IFS=: 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # read -r var val 00:12:35.459 00:32:08 -- accel/accel.sh@20 -- # val= 00:12:35.459 00:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # IFS=: 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # read -r var val 00:12:35.459 00:32:08 -- accel/accel.sh@20 -- # val= 00:12:35.459 00:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # IFS=: 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # read -r var val 00:12:35.459 00:32:08 -- accel/accel.sh@20 -- # val= 00:12:35.459 00:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # IFS=: 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # read -r var val 00:12:35.459 00:32:08 -- accel/accel.sh@20 -- # val= 00:12:35.459 00:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # IFS=: 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # read -r var val 00:12:35.459 00:32:08 -- accel/accel.sh@20 -- # val= 00:12:35.459 00:32:08 -- accel/accel.sh@21 -- # case "$var" in 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # IFS=: 00:12:35.459 00:32:08 -- accel/accel.sh@19 -- # read -r var val 00:12:35.459 00:32:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:35.459 00:32:08 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:35.459 00:32:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:35.459 00:12:35.459 real 0m2.420s 00:12:35.459 user 0m2.151s 00:12:35.459 sys 0m0.204s 00:12:35.459 00:32:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:35.459 ************************************ 00:12:35.459 END TEST accel_deomp_full_mthread 00:12:35.459 00:32:08 -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.459 ************************************ 00:12:35.459 00:32:08 -- accel/accel.sh@124 -- # [[ n == y ]] 00:12:35.459 00:32:08 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:35.459 00:32:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:35.459 00:32:08 -- accel/accel.sh@137 -- # build_accel_config 00:12:35.459 00:32:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.459 00:32:08 -- common/autotest_common.sh@10 -- # set +x 00:12:35.459 00:32:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:35.459 00:32:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:35.459 00:32:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:35.459 00:32:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:35.459 00:32:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:35.459 00:32:08 -- accel/accel.sh@40 -- # local IFS=, 00:12:35.459 00:32:08 -- accel/accel.sh@41 -- # jq -r . 00:12:35.459 ************************************ 00:12:35.459 START TEST accel_dif_functional_tests 00:12:35.459 ************************************ 00:12:35.459 00:32:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:35.459 [2024-04-27 00:32:08.830935] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:35.459 [2024-04-27 00:32:08.831604] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115433 ] 00:12:35.459 [2024-04-27 00:32:09.013517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:35.718 [2024-04-27 00:32:09.203500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.718 [2024-04-27 00:32:09.203910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.718 [2024-04-27 00:32:09.203939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.977 00:12:35.977 00:12:35.977 CUnit - A unit testing framework for C - Version 2.1-3 00:12:35.977 http://cunit.sourceforge.net/ 00:12:35.977 00:12:35.977 00:12:35.977 Suite: accel_dif 00:12:35.977 Test: verify: DIF generated, GUARD check ...passed 00:12:35.977 Test: verify: DIF generated, APPTAG check ...passed 00:12:35.977 Test: verify: DIF generated, REFTAG check ...passed 00:12:35.977 Test: verify: DIF not generated, GUARD check ...[2024-04-27 00:32:09.485433] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:35.977 passed 00:12:35.977 Test: verify: DIF not generated, APPTAG check ...[2024-04-27 00:32:09.485553] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:35.977 [2024-04-27 00:32:09.485633] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:35.977 [2024-04-27 00:32:09.485695] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:35.977 passed 00:12:35.977 Test: verify: DIF not generated, REFTAG check ...passed 00:12:35.977 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:35.977 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-27 00:32:09.485762] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:35.977 [2024-04-27 00:32:09.485815] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:35.977 passed 00:12:35.977 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-04-27 00:32:09.485945] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:35.977 passed 00:12:35.977 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:35.977 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:35.977 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-27 00:32:09.486226] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:35.977 passed 00:12:35.977 Test: generate copy: DIF generated, GUARD check ...passed 00:12:35.977 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:35.977 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:35.977 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:35.977 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:35.977 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:35.977 Test: generate copy: iovecs-len validate ...[2024-04-27 00:32:09.486680] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:12:35.977 passed 00:12:35.977 Test: generate copy: buffer alignment validate ...passed 00:12:35.977 00:12:35.977 Run Summary: Type Total Ran Passed Failed Inactive 00:12:35.977 suites 1 1 n/a 0 0 00:12:35.977 tests 20 20 20 0 0 00:12:35.977 asserts 204 204 204 0 n/a 00:12:35.977 00:12:35.977 Elapsed time = 0.005 seconds 00:12:37.350 00:12:37.350 real 0m1.753s 00:12:37.350 user 0m3.320s 00:12:37.350 sys 0m0.260s 00:12:37.350 00:32:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:37.350 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 ************************************ 00:12:37.350 END TEST accel_dif_functional_tests 00:12:37.350 ************************************ 00:12:37.350 ************************************ 00:12:37.350 END TEST accel 00:12:37.350 ************************************ 00:12:37.350 00:12:37.350 real 1m0.107s 00:12:37.350 user 1m4.770s 00:12:37.350 sys 0m6.302s 00:12:37.350 00:32:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:37.350 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 00:32:10 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:37.350 00:32:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:37.350 00:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.350 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 ************************************ 00:12:37.350 START TEST accel_rpc 00:12:37.350 ************************************ 00:12:37.350 00:32:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:37.350 * Looking for test storage... 00:12:37.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
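A note on the DIF suite that concluded above: it is a CUnit run, and the *ERROR* lines from dif.c are the expected output of deliberate negative tests. Each "verify: DIF not generated" case corrupts a Guard, App Tag, or Ref Tag field and asserts that verification catches the mismatch; the tally confirms nothing actually failed (20/20 tests, 204/204 asserts, 0 failures, elapsed 0.005 seconds). The binary takes its JSON config on fd 62 just like accel_perf, so it can be re-run standalone; the empty object is again only a placeholder:

  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 62< <(echo '{}')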
00:12:37.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:37.350 00:32:10 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:37.350 00:32:10 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=115530 00:12:37.350 00:32:10 -- accel/accel_rpc.sh@15 -- # waitforlisten 115530 00:12:37.350 00:32:10 -- common/autotest_common.sh@817 -- # '[' -z 115530 ']' 00:12:37.350 00:32:10 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:37.350 00:32:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.350 00:32:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:37.350 00:32:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.350 00:32:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:37.350 00:32:10 -- common/autotest_common.sh@10 -- # set +x 00:12:37.350 [2024-04-27 00:32:10.772179] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:37.350 [2024-04-27 00:32:10.772599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115530 ] 00:12:37.350 [2024-04-27 00:32:10.928755] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.613 [2024-04-27 00:32:11.113116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.188 00:32:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:38.188 00:32:11 -- common/autotest_common.sh@850 -- # return 0 00:12:38.188 00:32:11 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:38.188 00:32:11 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:38.188 00:32:11 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:38.188 00:32:11 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:38.188 00:32:11 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:38.188 00:32:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:38.188 00:32:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:38.188 00:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.447 ************************************ 00:12:38.447 START TEST accel_assign_opcode 00:12:38.447 ************************************ 00:12:38.447 00:32:11 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:12:38.447 00:32:11 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:38.447 00:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.447 00:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.447 [2024-04-27 00:32:11.782274] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:38.447 00:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.447 00:32:11 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:38.447 00:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.447 00:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:38.447 [2024-04-27 00:32:11.790247] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:38.447 00:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.447 00:32:11 -- accel/accel_rpc.sh@41 -- # rpc_cmd 
framework_start_init 00:12:38.447 00:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.447 00:32:11 -- common/autotest_common.sh@10 -- # set +x 00:12:39.018 00:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.018 00:32:12 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:39.018 00:32:12 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:39.018 00:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.018 00:32:12 -- accel/accel_rpc.sh@42 -- # grep software 00:12:39.018 00:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:39.018 00:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.018 software 00:12:39.018 ************************************ 00:12:39.018 END TEST accel_assign_opcode 00:12:39.018 ************************************ 00:12:39.018 00:12:39.018 real 0m0.752s 00:12:39.018 user 0m0.053s 00:12:39.018 sys 0m0.010s 00:12:39.018 00:32:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:39.018 00:32:12 -- common/autotest_common.sh@10 -- # set +x 00:12:39.018 00:32:12 -- accel/accel_rpc.sh@55 -- # killprocess 115530 00:12:39.018 00:32:12 -- common/autotest_common.sh@936 -- # '[' -z 115530 ']' 00:12:39.018 00:32:12 -- common/autotest_common.sh@940 -- # kill -0 115530 00:12:39.018 00:32:12 -- common/autotest_common.sh@941 -- # uname 00:12:39.018 00:32:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:39.018 00:32:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115530 00:12:39.018 killing process with pid 115530 00:12:39.018 00:32:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:39.018 00:32:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:39.018 00:32:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115530' 00:12:39.018 00:32:12 -- common/autotest_common.sh@955 -- # kill 115530 00:12:39.018 00:32:12 -- common/autotest_common.sh@960 -- # wait 115530 00:12:40.922 00:12:40.922 real 0m3.820s 00:12:40.922 user 0m3.938s 00:12:40.922 sys 0m0.459s 00:12:40.922 00:32:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:40.922 00:32:14 -- common/autotest_common.sh@10 -- # set +x 00:12:40.922 ************************************ 00:12:40.922 END TEST accel_rpc 00:12:40.922 ************************************ 00:12:40.922 00:32:14 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:40.922 00:32:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:40.922 00:32:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.922 00:32:14 -- common/autotest_common.sh@10 -- # set +x 00:12:41.181 ************************************ 00:12:41.181 START TEST app_cmdline 00:12:41.181 ************************************ 00:12:41.181 00:32:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:41.181 * Looking for test storage... 
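The accel_rpc test above is about the pre-init RPC window: spdk_tgt is started with --wait-for-rpc (framework paused), accel_assign_opc maps the copy opcode first to a bogus module ("incorrect") and then to software, and only afterwards does framework_start_init run; accel_get_opc_assignments piped through jq -r .copy then confirms the last assignment won. The same sequence over rpc.py, using exactly the method names from the trace:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o copy -m incorrect    # accepted pre-init, per the NOTICE above
  $RPC accel_assign_opc -o copy -m software     # last assignment wins
  $RPC framework_start_init
  $RPC accel_get_opc_assignments | jq -r .copy  # expect: software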
00:12:41.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:41.181 00:32:14 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:41.181 00:32:14 -- app/cmdline.sh@17 -- # spdk_tgt_pid=115676 00:12:41.181 00:32:14 -- app/cmdline.sh@18 -- # waitforlisten 115676 00:12:41.181 00:32:14 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:41.181 00:32:14 -- common/autotest_common.sh@817 -- # '[' -z 115676 ']' 00:12:41.181 00:32:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.181 00:32:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:41.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.181 00:32:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.181 00:32:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:41.182 00:32:14 -- common/autotest_common.sh@10 -- # set +x 00:12:41.182 [2024-04-27 00:32:14.688976] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:41.182 [2024-04-27 00:32:14.689190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115676 ] 00:12:41.440 [2024-04-27 00:32:14.856784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.699 [2024-04-27 00:32:15.042791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.267 00:32:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:42.267 00:32:15 -- common/autotest_common.sh@850 -- # return 0 00:12:42.267 00:32:15 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:42.526 { 00:12:42.526 "version": "SPDK v24.05-pre git sha1 6651b13f7", 00:12:42.526 "fields": { 00:12:42.526 "major": 24, 00:12:42.526 "minor": 5, 00:12:42.526 "patch": 0, 00:12:42.526 "suffix": "-pre", 00:12:42.526 "commit": "6651b13f7" 00:12:42.526 } 00:12:42.526 } 00:12:42.526 00:32:15 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:42.526 00:32:15 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:42.526 00:32:15 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:42.526 00:32:15 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:42.526 00:32:15 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:42.526 00:32:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:42.526 00:32:15 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:42.526 00:32:15 -- common/autotest_common.sh@10 -- # set +x 00:12:42.526 00:32:15 -- app/cmdline.sh@26 -- # sort 00:12:42.526 00:32:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:42.526 00:32:16 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:42.526 00:32:16 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:42.526 00:32:16 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:42.526 00:32:16 -- common/autotest_common.sh@638 -- # local es=0 00:12:42.526 00:32:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:42.526 00:32:16 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:42.526 00:32:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:42.526 00:32:16 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:42.526 00:32:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:42.526 00:32:16 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:42.526 00:32:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:42.526 00:32:16 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:42.526 00:32:16 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:42.526 00:32:16 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:42.785 request: 00:12:42.785 { 00:12:42.785 "method": "env_dpdk_get_mem_stats", 00:12:42.785 "req_id": 1 00:12:42.785 } 00:12:42.785 Got JSON-RPC error response 00:12:42.785 response: 00:12:42.785 { 00:12:42.785 "code": -32601, 00:12:42.785 "message": "Method not found" 00:12:42.785 } 00:12:42.785 00:32:16 -- common/autotest_common.sh@641 -- # es=1 00:12:42.785 00:32:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:42.785 00:32:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:42.785 00:32:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:42.785 00:32:16 -- app/cmdline.sh@1 -- # killprocess 115676 00:12:42.785 00:32:16 -- common/autotest_common.sh@936 -- # '[' -z 115676 ']' 00:12:42.785 00:32:16 -- common/autotest_common.sh@940 -- # kill -0 115676 00:12:42.785 00:32:16 -- common/autotest_common.sh@941 -- # uname 00:12:42.785 00:32:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:42.785 00:32:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115676 00:12:42.785 00:32:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:42.785 00:32:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:42.785 killing process with pid 115676 00:12:42.785 00:32:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115676' 00:12:42.785 00:32:16 -- common/autotest_common.sh@955 -- # kill 115676 00:12:42.785 00:32:16 -- common/autotest_common.sh@960 -- # wait 115676 00:12:44.690 00:12:44.690 real 0m3.652s 00:12:44.690 user 0m4.049s 00:12:44.690 sys 0m0.625s 00:12:44.690 00:32:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.690 00:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:44.690 ************************************ 00:12:44.690 END TEST app_cmdline 00:12:44.690 ************************************ 00:12:44.690 00:32:18 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:44.690 00:32:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:44.690 00:32:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.690 00:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:44.690 ************************************ 00:12:44.690 START TEST version 00:12:44.690 ************************************ 00:12:44.690 00:32:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:44.949 * Looking for test storage... 
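The app_cmdline exchange above is easy to reproduce by hand: spdk_tgt was started with a two-method allowlist, so those methods answer normally and everything else fails with JSON-RPC error -32601. A sketch using the same binaries and paths shown in the trace:

    # start the target with the allowlist cmdline.sh uses
    # (wait for /var/tmp/spdk.sock before issuing calls)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version       # version JSON, as above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods        # exactly the two methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats # "Method not found" (-32601)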
00:12:44.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:44.949 00:32:18 -- app/version.sh@17 -- # get_header_version major 00:12:44.949 00:32:18 -- app/version.sh@14 -- # cut -f2 00:12:44.949 00:32:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:44.949 00:32:18 -- app/version.sh@14 -- # tr -d '"' 00:12:44.949 00:32:18 -- app/version.sh@17 -- # major=24 00:12:44.949 00:32:18 -- app/version.sh@18 -- # get_header_version minor 00:12:44.949 00:32:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:44.949 00:32:18 -- app/version.sh@14 -- # cut -f2 00:12:44.949 00:32:18 -- app/version.sh@14 -- # tr -d '"' 00:12:44.949 00:32:18 -- app/version.sh@18 -- # minor=5 00:12:44.949 00:32:18 -- app/version.sh@19 -- # get_header_version patch 00:12:44.949 00:32:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:44.949 00:32:18 -- app/version.sh@14 -- # cut -f2 00:12:44.949 00:32:18 -- app/version.sh@14 -- # tr -d '"' 00:12:44.949 00:32:18 -- app/version.sh@19 -- # patch=0 00:12:44.949 00:32:18 -- app/version.sh@20 -- # get_header_version suffix 00:12:44.949 00:32:18 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:44.949 00:32:18 -- app/version.sh@14 -- # cut -f2 00:12:44.949 00:32:18 -- app/version.sh@14 -- # tr -d '"' 00:12:44.949 00:32:18 -- app/version.sh@20 -- # suffix=-pre 00:12:44.949 00:32:18 -- app/version.sh@22 -- # version=24.5 00:12:44.949 00:32:18 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:44.949 00:32:18 -- app/version.sh@28 -- # version=24.5rc0 00:12:44.949 00:32:18 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:44.949 00:32:18 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:44.949 00:32:18 -- app/version.sh@30 -- # py_version=24.5rc0 00:12:44.949 00:32:18 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:12:44.949 00:12:44.949 real 0m0.137s 00:12:44.949 user 0m0.090s 00:12:44.949 sys 0m0.082s 00:12:44.949 00:32:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.949 00:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:44.949 ************************************ 00:12:44.949 END TEST version 00:12:44.949 ************************************ 00:12:44.949 00:32:18 -- spdk/autotest.sh@184 -- # '[' 1 -eq 1 ']' 00:12:44.949 00:32:18 -- spdk/autotest.sh@185 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:44.949 00:32:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:44.949 00:32:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.949 00:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:44.949 ************************************ 00:12:44.950 START TEST blockdev_general 00:12:44.950 ************************************ 00:12:44.950 00:32:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:45.208 * Looking for test storage... 
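The get_header_version helper traced above is just a grep/cut/tr pipeline over include/spdk/version.h; a sketch reconstructed from the traced commands (the real version.sh may differ in detail):

    get_header_version() {    # e.g. get_header_version major -> 24
        grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    major=$(get_header_version major)    # 24
    minor=$(get_header_version minor)    # 5
    patch=$(get_header_version patch)    # 0
    suffix=$(get_header_version suffix)  # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [ "$suffix" = -pre ] && version=${version}rc0   # 24.5rc0, matching py_version above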
00:12:45.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:45.208 00:32:18 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:45.208 00:32:18 -- bdev/nbd_common.sh@6 -- # set -e 00:12:45.208 00:32:18 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:45.208 00:32:18 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:45.208 00:32:18 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:45.208 00:32:18 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:45.208 00:32:18 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:45.208 00:32:18 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:45.208 00:32:18 -- bdev/blockdev.sh@20 -- # : 00:12:45.208 00:32:18 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:12:45.208 00:32:18 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:12:45.208 00:32:18 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:12:45.208 00:32:18 -- bdev/blockdev.sh@674 -- # uname -s 00:12:45.208 00:32:18 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:12:45.208 00:32:18 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:12:45.208 00:32:18 -- bdev/blockdev.sh@682 -- # test_type=bdev 00:12:45.208 00:32:18 -- bdev/blockdev.sh@683 -- # crypto_device= 00:12:45.208 00:32:18 -- bdev/blockdev.sh@684 -- # dek= 00:12:45.208 00:32:18 -- bdev/blockdev.sh@685 -- # env_ctx= 00:12:45.208 00:32:18 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:12:45.208 00:32:18 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:12:45.208 00:32:18 -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:12:45.208 00:32:18 -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:12:45.208 00:32:18 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:12:45.208 00:32:18 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=115855 00:12:45.208 00:32:18 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:45.208 00:32:18 -- bdev/blockdev.sh@49 -- # waitforlisten 115855 00:12:45.208 00:32:18 -- common/autotest_common.sh@817 -- # '[' -z 115855 ']' 00:12:45.208 00:32:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.208 00:32:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:45.208 00:32:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.208 00:32:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:45.208 00:32:18 -- common/autotest_common.sh@10 -- # set +x 00:12:45.208 00:32:18 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:45.208 [2024-04-27 00:32:18.662836] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
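The Malloc0-Malloc9 bdevs, the Malloc1/Malloc2 splits, the TestPT passthru and the raid/concat/AIO0 volumes that show up below are all created over RPC by setup_bdev_conf (the bare rpc_cmd traced just below feeds it a heredoc batch). A hedged sketch of that batch, with sizes inferred from the bdev dump further down rather than from the script itself:

    rpc_cmd <<RPC
    bdev_malloc_create -b Malloc0 32 512          # 65536 blocks x 512 B = 32 MiB
    bdev_malloc_create -b Malloc3 32 512
    bdev_passthru_create -p TestPT -b Malloc3     # the "created pt_bdev for: TestPT" notice
    RPC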
00:12:45.208 [2024-04-27 00:32:18.663041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115855 ] 00:12:45.467 [2024-04-27 00:32:18.834159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.467 [2024-04-27 00:32:19.014999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.403 00:32:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:46.403 00:32:19 -- common/autotest_common.sh@850 -- # return 0 00:12:46.403 00:32:19 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:12:46.403 00:32:19 -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:12:46.403 00:32:19 -- bdev/blockdev.sh@53 -- # rpc_cmd 00:12:46.403 00:32:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.403 00:32:19 -- common/autotest_common.sh@10 -- # set +x 00:12:46.980 [2024-04-27 00:32:20.325841] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.980 [2024-04-27 00:32:20.325955] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.980 00:12:46.980 [2024-04-27 00:32:20.333797] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.980 [2024-04-27 00:32:20.333904] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.980 00:12:46.980 Malloc0 00:12:46.980 Malloc1 00:12:46.980 Malloc2 00:12:46.980 Malloc3 00:12:46.980 Malloc4 00:12:47.238 Malloc5 00:12:47.238 Malloc6 00:12:47.238 Malloc7 00:12:47.238 Malloc8 00:12:47.238 Malloc9 00:12:47.238 [2024-04-27 00:32:20.711172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:47.238 [2024-04-27 00:32:20.711279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.238 [2024-04-27 00:32:20.711318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:47.238 [2024-04-27 00:32:20.711348] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.238 [2024-04-27 00:32:20.713674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.238 [2024-04-27 00:32:20.713772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:47.238 TestPT 00:12:47.238 00:32:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.238 00:32:20 -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:47.238 5000+0 records in 00:12:47.238 5000+0 records out 00:12:47.238 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0281453 s, 364 MB/s 00:12:47.238 00:32:20 -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:47.238 00:32:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.238 00:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 AIO0 00:12:47.499 00:32:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.499 00:32:20 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:12:47.499 00:32:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.499 00:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 00:32:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.499 00:32:20 -- bdev/blockdev.sh@740 -- # cat 00:12:47.499 00:32:20 
-- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:12:47.499 00:32:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.499 00:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 00:32:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.499 00:32:20 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:12:47.499 00:32:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.499 00:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 00:32:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.499 00:32:20 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:47.499 00:32:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.499 00:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 00:32:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.499 00:32:20 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:12:47.499 00:32:20 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:12:47.499 00:32:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:47.499 00:32:20 -- common/autotest_common.sh@10 -- # set +x 00:12:47.499 00:32:20 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:12:47.499 00:32:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:47.499 00:32:20 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:12:47.499 00:32:20 -- bdev/blockdev.sh@749 -- # jq -r .name 00:12:47.500 00:32:20 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "662d598b-583b-4f15-aa40-184ea4bae8da"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "662d598b-583b-4f15-aa40-184ea4bae8da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "fc3495bf-5c14-57cd-88b3-a9185e571411"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fc3495bf-5c14-57cd-88b3-a9185e571411",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "09b8a3b6-21b6-5b50-8413-52c8031c05c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "09b8a3b6-21b6-5b50-8413-52c8031c05c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "26fe7ac1-4761-5d28-a833-ebfc874ecd09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "26fe7ac1-4761-5d28-a833-ebfc874ecd09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b31a8e02-2df0-5c3c-9486-ac4ffafe91da"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b31a8e02-2df0-5c3c-9486-ac4ffafe91da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1168c312-d62e-5500-9d58-0e8bcc03c3be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1168c312-d62e-5500-9d58-0e8bcc03c3be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f0ac4d95-df5e-5412-afb9-e4d5f8370e5e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f0ac4d95-df5e-5412-afb9-e4d5f8370e5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "741603e3-8fd9-5732-8248-f78d92f1b7bf"' ' ],' ' "product_name": "Split Disk",' ' 
"block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "741603e3-8fd9-5732-8248-f78d92f1b7bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "a60fd290-d0b0-5e7a-93c3-5ac75a6011de"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a60fd290-d0b0-5e7a-93c3-5ac75a6011de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "137ec376-6fd6-55b9-af28-97c73dbf35a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "137ec376-6fd6-55b9-af28-97c73dbf35a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "7f7d1238-7c6e-5e18-a9fd-8b709a524ec0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f7d1238-7c6e-5e18-a9fd-8b709a524ec0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1d7ddd13-136d-5149-8a73-a176d0b4e248"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1d7ddd13-136d-5149-8a73-a176d0b4e248",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d9d25a32-edf3-4e80-9c80-cf8f8682650e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d9d25a32-edf3-4e80-9c80-cf8f8682650e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d9d25a32-edf3-4e80-9c80-cf8f8682650e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0982c67a-94f0-407e-b628-ecc07df8d653",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2d34cf55-c54f-4210-9fe2-306b31ff9b07",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4662b4c1-0c34-4197-aec1-f270f607bc28"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4662b4c1-0c34-4197-aec1-f270f607bc28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4662b4c1-0c34-4197-aec1-f270f607bc28",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7a00da65-f4bc-4164-8dde-43a5bf114cb2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "8c16e05d-e931-4d57-96ce-b9bb3c389048",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' 
"aliases": [' ' "b204c3a3-8b5d-4742-a76c-62dce670ce88"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b204c3a3-8b5d-4742-a76c-62dce670ce88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b204c3a3-8b5d-4742-a76c-62dce670ce88",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "85e854c9-9cab-441d-b692-c0d284923b12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "216419db-81fa-4260-a928-f46716e168ac",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "91d28b83-2ecc-431f-918b-145bd42b24c3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "91d28b83-2ecc-431f-918b-145bd42b24c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:47.500 00:32:21 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:12:47.500 00:32:21 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:12:47.500 00:32:21 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:12:47.500 00:32:21 -- bdev/blockdev.sh@754 -- # killprocess 115855 00:12:47.500 00:32:21 -- common/autotest_common.sh@936 -- # '[' -z 115855 ']' 00:12:47.500 00:32:21 -- common/autotest_common.sh@940 -- # kill -0 115855 00:12:47.500 00:32:21 -- common/autotest_common.sh@941 -- # uname 00:12:47.500 00:32:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.500 00:32:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 115855 00:12:47.500 00:32:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:47.500 00:32:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:47.500 killing process with pid 115855 00:12:47.500 00:32:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 115855' 00:12:47.500 00:32:21 -- common/autotest_common.sh@955 -- # kill 115855 00:12:47.500 00:32:21 -- 
common/autotest_common.sh@960 -- # wait 115855 00:12:50.035 00:32:23 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:50.035 00:32:23 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:50.035 00:32:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:50.035 00:32:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.035 00:32:23 -- common/autotest_common.sh@10 -- # set +x 00:12:50.293 ************************************ 00:12:50.293 START TEST bdev_hello_world 00:12:50.293 ************************************ 00:12:50.293 00:32:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:50.293 [2024-04-27 00:32:23.721123] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:50.293 [2024-04-27 00:32:23.721313] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115949 ] 00:12:50.552 [2024-04-27 00:32:23.888240] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.552 [2024-04-27 00:32:24.058187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.121 [2024-04-27 00:32:24.402643] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:51.121 [2024-04-27 00:32:24.402793] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:51.121 [2024-04-27 00:32:24.410603] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:51.121 [2024-04-27 00:32:24.410706] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:51.121 [2024-04-27 00:32:24.418629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:51.121 [2024-04-27 00:32:24.418699] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:51.121 [2024-04-27 00:32:24.418744] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:51.121 [2024-04-27 00:32:24.593873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:51.121 [2024-04-27 00:32:24.594049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.121 [2024-04-27 00:32:24.594102] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:51.121 [2024-04-27 00:32:24.594135] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.121 [2024-04-27 00:32:24.596875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.121 [2024-04-27 00:32:24.596951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:51.381 [2024-04-27 00:32:24.887107] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:51.381 [2024-04-27 00:32:24.887215] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:51.381 [2024-04-27 00:32:24.887301] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:51.381 [2024-04-27 00:32:24.887387] hello_bdev.c: 138:hello_write: *NOTICE*: Writing 
to the bdev 00:12:51.381 [2024-04-27 00:32:24.887524] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:51.381 [2024-04-27 00:32:24.887589] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:51.381 [2024-04-27 00:32:24.887696] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:51.381 00:12:51.381 [2024-04-27 00:32:24.887764] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:53.285 00:12:53.285 real 0m2.934s 00:12:53.285 user 0m2.390s 00:12:53.285 sys 0m0.396s 00:12:53.285 00:32:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:53.285 00:32:26 -- common/autotest_common.sh@10 -- # set +x 00:12:53.285 ************************************ 00:12:53.285 END TEST bdev_hello_world 00:12:53.285 ************************************ 00:12:53.285 00:32:26 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:12:53.285 00:32:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:53.285 00:32:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.285 00:32:26 -- common/autotest_common.sh@10 -- # set +x 00:12:53.285 ************************************ 00:12:53.285 START TEST bdev_bounds 00:12:53.285 ************************************ 00:12:53.285 00:32:26 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:12:53.285 00:32:26 -- bdev/blockdev.sh@290 -- # bdevio_pid=116010 00:12:53.285 00:32:26 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:53.285 00:32:26 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:53.285 00:32:26 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 116010' 00:12:53.285 Process bdevio pid: 116010 00:12:53.285 00:32:26 -- bdev/blockdev.sh@293 -- # waitforlisten 116010 00:12:53.285 00:32:26 -- common/autotest_common.sh@817 -- # '[' -z 116010 ']' 00:12:53.285 00:32:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.285 00:32:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:53.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.285 00:32:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.285 00:32:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:53.285 00:32:26 -- common/autotest_common.sh@10 -- # set +x 00:12:53.285 [2024-04-27 00:32:26.747662] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
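For reference, the whole bdev_hello_world pass above reduces to one run of the example binary, which opens the named bdev, writes "Hello World!", reads it back and compares; the arguments exactly as traced:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0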
00:12:53.285 [2024-04-27 00:32:26.747878] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116010 ] 00:12:53.544 [2024-04-27 00:32:26.927356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.544 [2024-04-27 00:32:27.104730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.544 [2024-04-27 00:32:27.104882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.544 [2024-04-27 00:32:27.104881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.119 [2024-04-27 00:32:27.460147] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:54.119 [2024-04-27 00:32:27.460273] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:54.119 [2024-04-27 00:32:27.468111] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:54.119 [2024-04-27 00:32:27.468196] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:54.119 [2024-04-27 00:32:27.476137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:54.119 [2024-04-27 00:32:27.476206] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:54.119 [2024-04-27 00:32:27.476230] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:54.119 [2024-04-27 00:32:27.673431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:54.119 [2024-04-27 00:32:27.673581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.119 [2024-04-27 00:32:27.673628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:54.119 [2024-04-27 00:32:27.673650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.119 [2024-04-27 00:32:27.676283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.119 [2024-04-27 00:32:27.676347] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:54.686 00:32:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:54.686 00:32:28 -- common/autotest_common.sh@850 -- # return 0 00:12:54.686 00:32:28 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:54.686 I/O targets: 00:12:54.686 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:54.686 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:54.686 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:54.686 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:54.686 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:54.686 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:54.686 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:54.686 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:54.686 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:54.686 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:54.686 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:54.686 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:54.686 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:54.686 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:54.686 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:54.686 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
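Everything from here to the end of bdev_bounds is CUnit output from a single bdevio process. The driving pattern, sketched from the two commands traced above (bdevio starts waiting for an RPC trigger, then tests.py fires every suite against each registered bdev):

    # -w: wait for the RPC trigger; -s 0 comes from PRE_RESERVED_MEM above
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # once /var/tmp/spdk.sock is up, run all suites (AIO0, raid1, ... Malloc0)
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    killprocess $bdevio_pid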
00:12:54.686 00:12:54.686 00:12:54.686 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.687 http://cunit.sourceforge.net/ 00:12:54.687 00:12:54.687 00:12:54.687 Suite: bdevio tests on: AIO0 00:12:54.687 Test: blockdev write read block ...passed 00:12:54.687 Test: blockdev write zeroes read block ...passed 00:12:54.687 Test: blockdev write zeroes read no split ...passed 00:12:54.687 Test: blockdev write zeroes read split ...passed 00:12:54.687 Test: blockdev write zeroes read split partial ...passed 00:12:54.687 Test: blockdev reset ...passed 00:12:54.687 Test: blockdev write read 8 blocks ...passed 00:12:54.687 Test: blockdev write read size > 128k ...passed 00:12:54.687 Test: blockdev write read invalid size ...passed 00:12:54.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.687 Test: blockdev write read max offset ...passed 00:12:54.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.687 Test: blockdev writev readv 8 blocks ...passed 00:12:54.687 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.687 Test: blockdev writev readv block ...passed 00:12:54.687 Test: blockdev writev readv size > 128k ...passed 00:12:54.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.687 Test: blockdev comparev and writev ...passed 00:12:54.687 Test: blockdev nvme passthru rw ...passed 00:12:54.687 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.687 Test: blockdev nvme admin passthru ...passed 00:12:54.687 Test: blockdev copy ...passed 00:12:54.687 Suite: bdevio tests on: raid1 00:12:54.687 Test: blockdev write read block ...passed 00:12:54.687 Test: blockdev write zeroes read block ...passed 00:12:54.687 Test: blockdev write zeroes read no split ...passed 00:12:54.687 Test: blockdev write zeroes read split ...passed 00:12:54.687 Test: blockdev write zeroes read split partial ...passed 00:12:54.687 Test: blockdev reset ...passed 00:12:54.687 Test: blockdev write read 8 blocks ...passed 00:12:54.687 Test: blockdev write read size > 128k ...passed 00:12:54.687 Test: blockdev write read invalid size ...passed 00:12:54.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.687 Test: blockdev write read max offset ...passed 00:12:54.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.687 Test: blockdev writev readv 8 blocks ...passed 00:12:54.687 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.687 Test: blockdev writev readv block ...passed 00:12:54.687 Test: blockdev writev readv size > 128k ...passed 00:12:54.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.687 Test: blockdev comparev and writev ...passed 00:12:54.687 Test: blockdev nvme passthru rw ...passed 00:12:54.687 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.687 Test: blockdev nvme admin passthru ...passed 00:12:54.687 Test: blockdev copy ...passed 00:12:54.687 Suite: bdevio tests on: concat0 00:12:54.687 Test: blockdev write read block ...passed 00:12:54.687 Test: blockdev write zeroes read block ...passed 00:12:54.687 Test: blockdev write zeroes read no split ...passed 00:12:54.946 Test: blockdev write zeroes read split ...passed 00:12:54.946 Test: blockdev write zeroes read split partial ...passed 00:12:54.946 Test: blockdev reset 
...passed 00:12:54.946 Test: blockdev write read 8 blocks ...passed 00:12:54.946 Test: blockdev write read size > 128k ...passed 00:12:54.946 Test: blockdev write read invalid size ...passed 00:12:54.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.946 Test: blockdev write read max offset ...passed 00:12:54.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.946 Test: blockdev writev readv 8 blocks ...passed 00:12:54.946 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.946 Test: blockdev writev readv block ...passed 00:12:54.946 Test: blockdev writev readv size > 128k ...passed 00:12:54.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.946 Test: blockdev comparev and writev ...passed 00:12:54.946 Test: blockdev nvme passthru rw ...passed 00:12:54.946 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.946 Test: blockdev nvme admin passthru ...passed 00:12:54.946 Test: blockdev copy ...passed 00:12:54.946 Suite: bdevio tests on: raid0 00:12:54.946 Test: blockdev write read block ...passed 00:12:54.946 Test: blockdev write zeroes read block ...passed 00:12:54.946 Test: blockdev write zeroes read no split ...passed 00:12:54.946 Test: blockdev write zeroes read split ...passed 00:12:54.946 Test: blockdev write zeroes read split partial ...passed 00:12:54.946 Test: blockdev reset ...passed 00:12:54.946 Test: blockdev write read 8 blocks ...passed 00:12:54.946 Test: blockdev write read size > 128k ...passed 00:12:54.946 Test: blockdev write read invalid size ...passed 00:12:54.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.946 Test: blockdev write read max offset ...passed 00:12:54.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.946 Test: blockdev writev readv 8 blocks ...passed 00:12:54.946 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.946 Test: blockdev writev readv block ...passed 00:12:54.946 Test: blockdev writev readv size > 128k ...passed 00:12:54.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.946 Test: blockdev comparev and writev ...passed 00:12:54.946 Test: blockdev nvme passthru rw ...passed 00:12:54.946 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.946 Test: blockdev nvme admin passthru ...passed 00:12:54.946 Test: blockdev copy ...passed 00:12:54.946 Suite: bdevio tests on: TestPT 00:12:54.946 Test: blockdev write read block ...passed 00:12:54.946 Test: blockdev write zeroes read block ...passed 00:12:54.946 Test: blockdev write zeroes read no split ...passed 00:12:54.946 Test: blockdev write zeroes read split ...passed 00:12:54.946 Test: blockdev write zeroes read split partial ...passed 00:12:54.946 Test: blockdev reset ...passed 00:12:54.946 Test: blockdev write read 8 blocks ...passed 00:12:54.946 Test: blockdev write read size > 128k ...passed 00:12:54.946 Test: blockdev write read invalid size ...passed 00:12:54.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.946 Test: blockdev write read max offset ...passed 00:12:54.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.946 Test: blockdev writev readv 8 blocks 
...passed 00:12:54.946 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.946 Test: blockdev writev readv block ...passed 00:12:54.946 Test: blockdev writev readv size > 128k ...passed 00:12:54.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.946 Test: blockdev comparev and writev ...passed 00:12:54.946 Test: blockdev nvme passthru rw ...passed 00:12:54.946 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.946 Test: blockdev nvme admin passthru ...passed 00:12:54.946 Test: blockdev copy ...passed 00:12:54.946 Suite: bdevio tests on: Malloc2p7 00:12:54.946 Test: blockdev write read block ...passed 00:12:54.946 Test: blockdev write zeroes read block ...passed 00:12:54.946 Test: blockdev write zeroes read no split ...passed 00:12:54.946 Test: blockdev write zeroes read split ...passed 00:12:54.946 Test: blockdev write zeroes read split partial ...passed 00:12:54.946 Test: blockdev reset ...passed 00:12:54.946 Test: blockdev write read 8 blocks ...passed 00:12:54.946 Test: blockdev write read size > 128k ...passed 00:12:54.946 Test: blockdev write read invalid size ...passed 00:12:54.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:54.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:54.946 Test: blockdev write read max offset ...passed 00:12:54.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:54.946 Test: blockdev writev readv 8 blocks ...passed 00:12:54.946 Test: blockdev writev readv 30 x 1block ...passed 00:12:54.946 Test: blockdev writev readv block ...passed 00:12:54.946 Test: blockdev writev readv size > 128k ...passed 00:12:54.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:54.946 Test: blockdev comparev and writev ...passed 00:12:54.946 Test: blockdev nvme passthru rw ...passed 00:12:54.946 Test: blockdev nvme passthru vendor specific ...passed 00:12:54.946 Test: blockdev nvme admin passthru ...passed 00:12:54.946 Test: blockdev copy ...passed 00:12:54.946 Suite: bdevio tests on: Malloc2p6 00:12:54.946 Test: blockdev write read block ...passed 00:12:54.946 Test: blockdev write zeroes read block ...passed 00:12:54.946 Test: blockdev write zeroes read no split ...passed 00:12:54.946 Test: blockdev write zeroes read split ...passed 00:12:55.205 Test: blockdev write zeroes read split partial ...passed 00:12:55.205 Test: blockdev reset ...passed 00:12:55.205 Test: blockdev write read 8 blocks ...passed 00:12:55.205 Test: blockdev write read size > 128k ...passed 00:12:55.205 Test: blockdev write read invalid size ...passed 00:12:55.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.205 Test: blockdev write read max offset ...passed 00:12:55.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.205 Test: blockdev writev readv 8 blocks ...passed 00:12:55.205 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.205 Test: blockdev writev readv block ...passed 00:12:55.205 Test: blockdev writev readv size > 128k ...passed 00:12:55.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.205 Test: blockdev comparev and writev ...passed 00:12:55.205 Test: blockdev nvme passthru rw ...passed 00:12:55.205 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.205 Test: blockdev nvme admin passthru ...passed 00:12:55.205 Test: blockdev copy ...passed 
00:12:55.205 Suite: bdevio tests on: Malloc2p5 00:12:55.205 Test: blockdev write read block ...passed 00:12:55.205 Test: blockdev write zeroes read block ...passed 00:12:55.205 Test: blockdev write zeroes read no split ...passed 00:12:55.205 Test: blockdev write zeroes read split ...passed 00:12:55.205 Test: blockdev write zeroes read split partial ...passed 00:12:55.205 Test: blockdev reset ...passed 00:12:55.205 Test: blockdev write read 8 blocks ...passed 00:12:55.205 Test: blockdev write read size > 128k ...passed 00:12:55.205 Test: blockdev write read invalid size ...passed 00:12:55.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.205 Test: blockdev write read max offset ...passed 00:12:55.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.205 Test: blockdev writev readv 8 blocks ...passed 00:12:55.205 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.205 Test: blockdev writev readv block ...passed 00:12:55.205 Test: blockdev writev readv size > 128k ...passed 00:12:55.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.205 Test: blockdev comparev and writev ...passed 00:12:55.205 Test: blockdev nvme passthru rw ...passed 00:12:55.205 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.205 Test: blockdev nvme admin passthru ...passed 00:12:55.205 Test: blockdev copy ...passed 00:12:55.205 Suite: bdevio tests on: Malloc2p4 00:12:55.205 Test: blockdev write read block ...passed 00:12:55.205 Test: blockdev write zeroes read block ...passed 00:12:55.205 Test: blockdev write zeroes read no split ...passed 00:12:55.205 Test: blockdev write zeroes read split ...passed 00:12:55.205 Test: blockdev write zeroes read split partial ...passed 00:12:55.205 Test: blockdev reset ...passed 00:12:55.205 Test: blockdev write read 8 blocks ...passed 00:12:55.205 Test: blockdev write read size > 128k ...passed 00:12:55.205 Test: blockdev write read invalid size ...passed 00:12:55.205 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.205 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.205 Test: blockdev write read max offset ...passed 00:12:55.205 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.205 Test: blockdev writev readv 8 blocks ...passed 00:12:55.205 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.205 Test: blockdev writev readv block ...passed 00:12:55.205 Test: blockdev writev readv size > 128k ...passed 00:12:55.205 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.205 Test: blockdev comparev and writev ...passed 00:12:55.205 Test: blockdev nvme passthru rw ...passed 00:12:55.205 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.206 Test: blockdev nvme admin passthru ...passed 00:12:55.206 Test: blockdev copy ...passed 00:12:55.206 Suite: bdevio tests on: Malloc2p3 00:12:55.206 Test: blockdev write read block ...passed 00:12:55.206 Test: blockdev write zeroes read block ...passed 00:12:55.206 Test: blockdev write zeroes read no split ...passed 00:12:55.206 Test: blockdev write zeroes read split ...passed 00:12:55.206 Test: blockdev write zeroes read split partial ...passed 00:12:55.206 Test: blockdev reset ...passed 00:12:55.206 Test: blockdev write read 8 blocks ...passed 00:12:55.206 Test: blockdev write read size > 128k ...passed 00:12:55.206 Test: 
blockdev write read invalid size ...passed 00:12:55.206 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.206 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.206 Test: blockdev write read max offset ...passed 00:12:55.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.206 Test: blockdev writev readv 8 blocks ...passed 00:12:55.206 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.206 Test: blockdev writev readv block ...passed 00:12:55.206 Test: blockdev writev readv size > 128k ...passed 00:12:55.206 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.206 Test: blockdev comparev and writev ...passed 00:12:55.206 Test: blockdev nvme passthru rw ...passed 00:12:55.206 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.206 Test: blockdev nvme admin passthru ...passed 00:12:55.206 Test: blockdev copy ...passed 00:12:55.206 Suite: bdevio tests on: Malloc2p2 00:12:55.206 Test: blockdev write read block ...passed 00:12:55.206 Test: blockdev write zeroes read block ...passed 00:12:55.206 Test: blockdev write zeroes read no split ...passed 00:12:55.206 Test: blockdev write zeroes read split ...passed 00:12:55.206 Test: blockdev write zeroes read split partial ...passed 00:12:55.206 Test: blockdev reset ...passed 00:12:55.206 Test: blockdev write read 8 blocks ...passed 00:12:55.206 Test: blockdev write read size > 128k ...passed 00:12:55.206 Test: blockdev write read invalid size ...passed 00:12:55.206 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.206 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.206 Test: blockdev write read max offset ...passed 00:12:55.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.206 Test: blockdev writev readv 8 blocks ...passed 00:12:55.206 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.206 Test: blockdev writev readv block ...passed 00:12:55.206 Test: blockdev writev readv size > 128k ...passed 00:12:55.206 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.206 Test: blockdev comparev and writev ...passed 00:12:55.206 Test: blockdev nvme passthru rw ...passed 00:12:55.206 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.206 Test: blockdev nvme admin passthru ...passed 00:12:55.206 Test: blockdev copy ...passed 00:12:55.206 Suite: bdevio tests on: Malloc2p1 00:12:55.206 Test: blockdev write read block ...passed 00:12:55.206 Test: blockdev write zeroes read block ...passed 00:12:55.206 Test: blockdev write zeroes read no split ...passed 00:12:55.206 Test: blockdev write zeroes read split ...passed 00:12:55.206 Test: blockdev write zeroes read split partial ...passed 00:12:55.206 Test: blockdev reset ...passed 00:12:55.206 Test: blockdev write read 8 blocks ...passed 00:12:55.206 Test: blockdev write read size > 128k ...passed 00:12:55.206 Test: blockdev write read invalid size ...passed 00:12:55.206 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.206 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.206 Test: blockdev write read max offset ...passed 00:12:55.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.206 Test: blockdev writev readv 8 blocks ...passed 00:12:55.206 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.206 Test: blockdev writev readv block ...passed 
00:12:55.206 Test: blockdev writev readv size > 128k ...passed 00:12:55.206 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.206 Test: blockdev comparev and writev ...passed 00:12:55.206 Test: blockdev nvme passthru rw ...passed 00:12:55.206 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.206 Test: blockdev nvme admin passthru ...passed 00:12:55.206 Test: blockdev copy ...passed 00:12:55.206 Suite: bdevio tests on: Malloc2p0 00:12:55.206 Test: blockdev write read block ...passed 00:12:55.206 Test: blockdev write zeroes read block ...passed 00:12:55.465 Test: blockdev write zeroes read no split ...passed 00:12:55.465 Test: blockdev write zeroes read split ...passed 00:12:55.465 Test: blockdev write zeroes read split partial ...passed 00:12:55.465 Test: blockdev reset ...passed 00:12:55.465 Test: blockdev write read 8 blocks ...passed 00:12:55.465 Test: blockdev write read size > 128k ...passed 00:12:55.465 Test: blockdev write read invalid size ...passed 00:12:55.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.465 Test: blockdev write read max offset ...passed 00:12:55.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.465 Test: blockdev writev readv 8 blocks ...passed 00:12:55.465 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.465 Test: blockdev writev readv block ...passed 00:12:55.465 Test: blockdev writev readv size > 128k ...passed 00:12:55.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.465 Test: blockdev comparev and writev ...passed 00:12:55.465 Test: blockdev nvme passthru rw ...passed 00:12:55.465 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.465 Test: blockdev nvme admin passthru ...passed 00:12:55.465 Test: blockdev copy ...passed 00:12:55.465 Suite: bdevio tests on: Malloc1p1 00:12:55.465 Test: blockdev write read block ...passed 00:12:55.465 Test: blockdev write zeroes read block ...passed 00:12:55.465 Test: blockdev write zeroes read no split ...passed 00:12:55.465 Test: blockdev write zeroes read split ...passed 00:12:55.465 Test: blockdev write zeroes read split partial ...passed 00:12:55.465 Test: blockdev reset ...passed 00:12:55.465 Test: blockdev write read 8 blocks ...passed 00:12:55.465 Test: blockdev write read size > 128k ...passed 00:12:55.465 Test: blockdev write read invalid size ...passed 00:12:55.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.465 Test: blockdev write read max offset ...passed 00:12:55.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.465 Test: blockdev writev readv 8 blocks ...passed 00:12:55.465 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.465 Test: blockdev writev readv block ...passed 00:12:55.465 Test: blockdev writev readv size > 128k ...passed 00:12:55.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.465 Test: blockdev comparev and writev ...passed 00:12:55.465 Test: blockdev nvme passthru rw ...passed 00:12:55.465 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.465 Test: blockdev nvme admin passthru ...passed 00:12:55.465 Test: blockdev copy ...passed 00:12:55.465 Suite: bdevio tests on: Malloc1p0 00:12:55.465 Test: blockdev write read block ...passed 00:12:55.465 Test: blockdev 
write zeroes read block ...passed 00:12:55.465 Test: blockdev write zeroes read no split ...passed 00:12:55.465 Test: blockdev write zeroes read split ...passed 00:12:55.465 Test: blockdev write zeroes read split partial ...passed 00:12:55.465 Test: blockdev reset ...passed 00:12:55.465 Test: blockdev write read 8 blocks ...passed 00:12:55.465 Test: blockdev write read size > 128k ...passed 00:12:55.465 Test: blockdev write read invalid size ...passed 00:12:55.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.465 Test: blockdev write read max offset ...passed 00:12:55.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.465 Test: blockdev writev readv 8 blocks ...passed 00:12:55.465 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.465 Test: blockdev writev readv block ...passed 00:12:55.465 Test: blockdev writev readv size > 128k ...passed 00:12:55.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.465 Test: blockdev comparev and writev ...passed 00:12:55.465 Test: blockdev nvme passthru rw ...passed 00:12:55.465 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.465 Test: blockdev nvme admin passthru ...passed 00:12:55.465 Test: blockdev copy ...passed 00:12:55.465 Suite: bdevio tests on: Malloc0 00:12:55.465 Test: blockdev write read block ...passed 00:12:55.465 Test: blockdev write zeroes read block ...passed 00:12:55.465 Test: blockdev write zeroes read no split ...passed 00:12:55.465 Test: blockdev write zeroes read split ...passed 00:12:55.465 Test: blockdev write zeroes read split partial ...passed 00:12:55.465 Test: blockdev reset ...passed 00:12:55.465 Test: blockdev write read 8 blocks ...passed 00:12:55.465 Test: blockdev write read size > 128k ...passed 00:12:55.465 Test: blockdev write read invalid size ...passed 00:12:55.465 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:55.465 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:55.465 Test: blockdev write read max offset ...passed 00:12:55.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:55.465 Test: blockdev writev readv 8 blocks ...passed 00:12:55.465 Test: blockdev writev readv 30 x 1block ...passed 00:12:55.465 Test: blockdev writev readv block ...passed 00:12:55.465 Test: blockdev writev readv size > 128k ...passed 00:12:55.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:55.465 Test: blockdev comparev and writev ...passed 00:12:55.465 Test: blockdev nvme passthru rw ...passed 00:12:55.465 Test: blockdev nvme passthru vendor specific ...passed 00:12:55.465 Test: blockdev nvme admin passthru ...passed 00:12:55.465 Test: blockdev copy ...passed 00:12:55.465 00:12:55.465 Run Summary: Type Total Ran Passed Failed Inactive 00:12:55.465 suites 16 16 n/a 0 0 00:12:55.465 tests 368 368 368 0 0 00:12:55.465 asserts 2224 2224 2224 0 n/a 00:12:55.465 00:12:55.465 Elapsed time = 2.426 seconds 00:12:55.465 0 00:12:55.465 00:32:29 -- bdev/blockdev.sh@295 -- # killprocess 116010 00:12:55.465 00:32:29 -- common/autotest_common.sh@936 -- # '[' -z 116010 ']' 00:12:55.465 00:32:29 -- common/autotest_common.sh@940 -- # kill -0 116010 00:12:55.465 00:32:29 -- common/autotest_common.sh@941 -- # uname 00:12:55.465 00:32:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:55.465 00:32:29 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116010 00:12:55.724 00:32:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:55.724 killing process with pid 116010 00:12:55.724 00:32:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:55.724 00:32:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116010' 00:12:55.724 00:32:29 -- common/autotest_common.sh@955 -- # kill 116010 00:12:55.724 00:32:29 -- common/autotest_common.sh@960 -- # wait 116010 00:12:57.101 00:32:30 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:12:57.101 00:12:57.101 real 0m3.953s 00:12:57.101 user 0m9.920s 00:12:57.101 sys 0m0.564s 00:12:57.101 ************************************ 00:12:57.101 END TEST bdev_bounds 00:12:57.101 ************************************ 00:12:57.101 00:32:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:57.101 00:32:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.101 00:32:30 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:57.101 00:32:30 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:57.101 00:32:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.101 00:32:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.359 ************************************ 00:12:57.359 START TEST bdev_nbd 00:12:57.359 ************************************ 00:12:57.359 00:32:30 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:57.359 00:32:30 -- bdev/blockdev.sh@300 -- # uname -s 00:12:57.359 00:32:30 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:12:57.359 00:32:30 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:57.359 00:32:30 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:57.359 00:32:30 -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:57.359 00:32:30 -- bdev/blockdev.sh@304 -- # local bdev_all 00:12:57.359 00:32:30 -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:12:57.359 00:32:30 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:12:57.359 00:32:30 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:57.359 00:32:30 -- bdev/blockdev.sh@311 -- # local nbd_all 00:12:57.359 00:32:30 -- bdev/blockdev.sh@312 -- # bdev_num=16 00:12:57.359 00:32:30 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:57.359 00:32:30 -- bdev/blockdev.sh@314 -- # local nbd_list 00:12:57.359 00:32:30 -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:57.359 00:32:30 -- bdev/blockdev.sh@315 -- # local bdev_list 00:12:57.359 00:32:30 -- bdev/blockdev.sh@318 -- # nbd_pid=116104 00:12:57.359 00:32:30 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:57.359 00:32:30 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:57.359 00:32:30 -- bdev/blockdev.sh@320 -- # waitforlisten 116104 /var/tmp/spdk-nbd.sock 00:12:57.359 00:32:30 -- common/autotest_common.sh@817 -- # '[' -z 116104 ']' 00:12:57.359 00:32:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:57.359 00:32:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:57.359 00:32:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:57.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:57.359 00:32:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:57.359 00:32:30 -- common/autotest_common.sh@10 -- # set +x 00:12:57.359 [2024-04-27 00:32:30.789661] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:12:57.359 [2024-04-27 00:32:30.790112] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.617 [2024-04-27 00:32:30.960451] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.617 [2024-04-27 00:32:31.141749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.184 [2024-04-27 00:32:31.505748] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:58.184 [2024-04-27 00:32:31.506061] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:58.184 [2024-04-27 00:32:31.513699] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:58.184 [2024-04-27 00:32:31.513878] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:58.184 [2024-04-27 00:32:31.521726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:58.184 [2024-04-27 00:32:31.521894] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:58.184 [2024-04-27 00:32:31.522039] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:58.184 [2024-04-27 00:32:31.698095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:58.184 [2024-04-27 00:32:31.698502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.184 [2024-04-27 00:32:31.698662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:58.184 [2024-04-27 00:32:31.698791] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.184 [2024-04-27 00:32:31.701082] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.184 [2024-04-27 00:32:31.701269] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:58.443 00:32:32 -- common/autotest_common.sh@846 -- # (( i == 0 
)) 00:12:58.443 00:32:32 -- common/autotest_common.sh@850 -- # return 0 00:12:58.443 00:32:32 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:58.443 00:32:32 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.443 00:32:32 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:58.443 00:32:32 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:58.443 00:32:32 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:58.443 00:32:32 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.443 00:32:32 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:58.444 00:32:32 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:58.444 00:32:32 -- bdev/nbd_common.sh@24 -- # local i 00:12:58.444 00:32:32 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:58.444 00:32:32 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:58.444 00:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:58.444 00:32:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:59.010 00:32:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:59.010 00:32:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:59.010 00:32:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:59.010 00:32:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:59.010 00:32:32 -- common/autotest_common.sh@855 -- # local i 00:12:59.010 00:32:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:59.010 00:32:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:59.010 00:32:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:59.010 00:32:32 -- common/autotest_common.sh@859 -- # break 00:12:59.010 00:32:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:59.010 00:32:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:59.010 00:32:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.010 1+0 records in 00:12:59.010 1+0 records out 00:12:59.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676168 s, 6.1 MB/s 00:12:59.010 00:32:32 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.010 00:32:32 -- common/autotest_common.sh@872 -- # size=4096 00:12:59.010 00:32:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.010 00:32:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:59.010 00:32:32 -- common/autotest_common.sh@875 -- # return 0 00:12:59.010 00:32:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.010 00:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:59.010 00:32:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:59.010 00:32:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:59.010 00:32:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:59.268 00:32:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:59.268 00:32:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:59.269 00:32:32 -- common/autotest_common.sh@855 -- # local i 00:12:59.269 00:32:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:59.269 00:32:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:59.269 00:32:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:59.269 00:32:32 -- common/autotest_common.sh@859 -- # break 00:12:59.269 00:32:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:59.269 00:32:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:59.269 00:32:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.269 1+0 records in 00:12:59.269 1+0 records out 00:12:59.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081749 s, 5.0 MB/s 00:12:59.269 00:32:32 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.269 00:32:32 -- common/autotest_common.sh@872 -- # size=4096 00:12:59.269 00:32:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.269 00:32:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:59.269 00:32:32 -- common/autotest_common.sh@875 -- # return 0 00:12:59.269 00:32:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.269 00:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:59.269 00:32:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:59.527 00:32:32 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:59.527 00:32:32 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:59.527 00:32:32 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:59.527 00:32:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:12:59.527 00:32:32 -- common/autotest_common.sh@855 -- # local i 00:12:59.527 00:32:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:59.527 00:32:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:59.527 00:32:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:12:59.527 00:32:32 -- common/autotest_common.sh@859 -- # break 00:12:59.527 00:32:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:59.527 00:32:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:59.527 00:32:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.527 1+0 records in 00:12:59.527 1+0 records out 00:12:59.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475349 s, 8.6 MB/s 00:12:59.527 00:32:32 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.527 00:32:32 -- common/autotest_common.sh@872 -- # size=4096 00:12:59.527 00:32:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.527 00:32:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:59.527 00:32:32 -- common/autotest_common.sh@875 -- # return 0 00:12:59.527 00:32:32 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.527 00:32:32 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:59.527 
00:32:32 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:59.527 00:32:33 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:59.527 00:32:33 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:59.527 00:32:33 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:59.527 00:32:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:12:59.527 00:32:33 -- common/autotest_common.sh@855 -- # local i 00:12:59.527 00:32:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:59.527 00:32:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:59.527 00:32:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:12:59.527 00:32:33 -- common/autotest_common.sh@859 -- # break 00:12:59.527 00:32:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:59.527 00:32:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:59.527 00:32:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.527 1+0 records in 00:12:59.527 1+0 records out 00:12:59.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586591 s, 7.0 MB/s 00:12:59.785 00:32:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.785 00:32:33 -- common/autotest_common.sh@872 -- # size=4096 00:12:59.785 00:32:33 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.785 00:32:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:59.785 00:32:33 -- common/autotest_common.sh@875 -- # return 0 00:12:59.785 00:32:33 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.785 00:32:33 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:59.785 00:32:33 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:13:00.043 00:32:33 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:00.043 00:32:33 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:00.043 00:32:33 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:00.043 00:32:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:13:00.043 00:32:33 -- common/autotest_common.sh@855 -- # local i 00:13:00.043 00:32:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:00.043 00:32:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:00.043 00:32:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:13:00.043 00:32:33 -- common/autotest_common.sh@859 -- # break 00:13:00.043 00:32:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:00.043 00:32:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:00.043 00:32:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.043 1+0 records in 00:13:00.043 1+0 records out 00:13:00.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461154 s, 8.9 MB/s 00:13:00.043 00:32:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.043 00:32:33 -- common/autotest_common.sh@872 -- # size=4096 00:13:00.043 00:32:33 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.043 00:32:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:00.043 00:32:33 -- common/autotest_common.sh@875 -- # return 0 00:13:00.043 00:32:33 -- bdev/nbd_common.sh@27 -- # 
(( i++ )) 00:13:00.043 00:32:33 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:00.043 00:32:33 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:13:00.302 00:32:33 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:00.302 00:32:33 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:00.302 00:32:33 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:00.302 00:32:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:13:00.302 00:32:33 -- common/autotest_common.sh@855 -- # local i 00:13:00.302 00:32:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:00.302 00:32:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:00.302 00:32:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:13:00.302 00:32:33 -- common/autotest_common.sh@859 -- # break 00:13:00.302 00:32:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:00.302 00:32:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:00.302 00:32:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.302 1+0 records in 00:13:00.302 1+0 records out 00:13:00.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000928554 s, 4.4 MB/s 00:13:00.302 00:32:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.302 00:32:33 -- common/autotest_common.sh@872 -- # size=4096 00:13:00.302 00:32:33 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.302 00:32:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:00.302 00:32:33 -- common/autotest_common.sh@875 -- # return 0 00:13:00.302 00:32:33 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:00.302 00:32:33 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:00.302 00:32:33 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:13:00.560 00:32:33 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:13:00.560 00:32:33 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:13:00.560 00:32:33 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:13:00.560 00:32:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:13:00.560 00:32:33 -- common/autotest_common.sh@855 -- # local i 00:13:00.560 00:32:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:00.560 00:32:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:00.560 00:32:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:13:00.560 00:32:33 -- common/autotest_common.sh@859 -- # break 00:13:00.560 00:32:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:00.560 00:32:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:00.560 00:32:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.560 1+0 records in 00:13:00.560 1+0 records out 00:13:00.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748186 s, 5.5 MB/s 00:13:00.560 00:32:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.560 00:32:34 -- common/autotest_common.sh@872 -- # size=4096 00:13:00.560 00:32:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.560 00:32:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:00.560 00:32:34 -- 
common/autotest_common.sh@875 -- # return 0 00:13:00.560 00:32:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:00.560 00:32:34 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:00.560 00:32:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:13:00.818 00:32:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:13:00.818 00:32:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:13:00.818 00:32:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:13:00.818 00:32:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:13:00.818 00:32:34 -- common/autotest_common.sh@855 -- # local i 00:13:00.818 00:32:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:00.818 00:32:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:00.818 00:32:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:13:00.818 00:32:34 -- common/autotest_common.sh@859 -- # break 00:13:00.818 00:32:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:00.818 00:32:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:00.818 00:32:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.818 1+0 records in 00:13:00.818 1+0 records out 00:13:00.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599751 s, 6.8 MB/s 00:13:00.818 00:32:34 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.818 00:32:34 -- common/autotest_common.sh@872 -- # size=4096 00:13:00.818 00:32:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.818 00:32:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:00.818 00:32:34 -- common/autotest_common.sh@875 -- # return 0 00:13:00.818 00:32:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:00.818 00:32:34 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:00.818 00:32:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:13:01.077 00:32:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:13:01.077 00:32:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:13:01.077 00:32:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:13:01.077 00:32:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:13:01.077 00:32:34 -- common/autotest_common.sh@855 -- # local i 00:13:01.077 00:32:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:01.077 00:32:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:01.077 00:32:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:13:01.077 00:32:34 -- common/autotest_common.sh@859 -- # break 00:13:01.077 00:32:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:01.077 00:32:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:01.077 00:32:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.077 1+0 records in 00:13:01.077 1+0 records out 00:13:01.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045132 s, 9.1 MB/s 00:13:01.077 00:32:34 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.077 00:32:34 -- common/autotest_common.sh@872 -- # size=4096 00:13:01.077 00:32:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.077 
00:32:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:01.077 00:32:34 -- common/autotest_common.sh@875 -- # return 0 00:13:01.077 00:32:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.077 00:32:34 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:01.077 00:32:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:13:01.336 00:32:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:13:01.336 00:32:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:13:01.336 00:32:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:13:01.336 00:32:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:13:01.336 00:32:34 -- common/autotest_common.sh@855 -- # local i 00:13:01.336 00:32:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:01.336 00:32:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:01.336 00:32:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:13:01.336 00:32:34 -- common/autotest_common.sh@859 -- # break 00:13:01.336 00:32:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:01.336 00:32:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:01.336 00:32:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.336 1+0 records in 00:13:01.336 1+0 records out 00:13:01.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000978654 s, 4.2 MB/s 00:13:01.336 00:32:34 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.336 00:32:34 -- common/autotest_common.sh@872 -- # size=4096 00:13:01.336 00:32:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.336 00:32:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:01.336 00:32:34 -- common/autotest_common.sh@875 -- # return 0 00:13:01.336 00:32:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.336 00:32:34 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:01.336 00:32:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:13:01.595 00:32:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:13:01.595 00:32:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:13:01.595 00:32:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:13:01.595 00:32:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:13:01.595 00:32:35 -- common/autotest_common.sh@855 -- # local i 00:13:01.595 00:32:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:01.595 00:32:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:01.595 00:32:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:13:01.595 00:32:35 -- common/autotest_common.sh@859 -- # break 00:13:01.595 00:32:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:01.595 00:32:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:01.595 00:32:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.595 1+0 records in 00:13:01.595 1+0 records out 00:13:01.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974529 s, 4.2 MB/s 00:13:01.595 00:32:35 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.595 00:32:35 -- common/autotest_common.sh@872 -- # size=4096 00:13:01.595 00:32:35 -- 
common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.854 00:32:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:01.854 00:32:35 -- common/autotest_common.sh@875 -- # return 0 00:13:01.854 00:32:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.854 00:32:35 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:01.854 00:32:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:13:01.854 00:32:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:13:01.854 00:32:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:13:01.854 00:32:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:13:01.854 00:32:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:13:01.854 00:32:35 -- common/autotest_common.sh@855 -- # local i 00:13:01.854 00:32:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:01.854 00:32:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:01.854 00:32:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:13:01.854 00:32:35 -- common/autotest_common.sh@859 -- # break 00:13:01.854 00:32:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:01.854 00:32:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:01.855 00:32:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.855 1+0 records in 00:13:01.855 1+0 records out 00:13:01.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677918 s, 6.0 MB/s 00:13:01.855 00:32:35 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.855 00:32:35 -- common/autotest_common.sh@872 -- # size=4096 00:13:01.855 00:32:35 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.855 00:32:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:01.855 00:32:35 -- common/autotest_common.sh@875 -- # return 0 00:13:01.855 00:32:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:01.855 00:32:35 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:01.855 00:32:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:13:02.130 00:32:35 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:13:02.130 00:32:35 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:13:02.130 00:32:35 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:13:02.130 00:32:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:13:02.130 00:32:35 -- common/autotest_common.sh@855 -- # local i 00:13:02.130 00:32:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:02.130 00:32:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:02.130 00:32:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:13:02.130 00:32:35 -- common/autotest_common.sh@859 -- # break 00:13:02.130 00:32:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:02.130 00:32:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:02.130 00:32:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.389 1+0 records in 00:13:02.389 1+0 records out 00:13:02.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000915042 s, 4.5 MB/s 00:13:02.389 00:32:35 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
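The same start-up pattern repeats for every bdev: nbd_start_disk is issued over the RPC socket at /var/tmp/spdk-nbd.sock, then the harness polls until the new /dev/nbdN is actually usable. A minimal sketch of that readiness check, reconstructed from the xtrace above (the real helper is waitfornbd from common/autotest_common.sh; the sleep between retries is an assumption, since no delay is visible in this excerpt):

  waitfornbd_sketch() {
      local nbd_name=$1 i
      local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      # phase 1: wait for the kernel to register the device
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed back-off, not shown in the trace
      done
      # phase 2: prove the device answers I/O with one 4 KiB direct read
      for ((i = 1; i <= 20; i++)); do
          dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct && break
          sleep 0.1   # assumed
      done
      local size
      size=$(stat -c %s "$scratch")
      rm -f "$scratch"
      [ "$size" != 0 ]   # non-empty read => device ready (return 0)
  }

In the entries above every dd reports 1+0 records out on the first attempt, so both loops exit on their first iteration.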
00:13:02.389 00:32:35 -- common/autotest_common.sh@872 -- # size=4096 00:13:02.389 00:32:35 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.389 00:32:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:02.389 00:32:35 -- common/autotest_common.sh@875 -- # return 0 00:13:02.389 00:32:35 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.389 00:32:35 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:02.389 00:32:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:13:02.648 00:32:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:13:02.648 00:32:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:13:02.648 00:32:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:13:02.648 00:32:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:13:02.648 00:32:36 -- common/autotest_common.sh@855 -- # local i 00:13:02.648 00:32:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:02.648 00:32:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:02.648 00:32:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:13:02.648 00:32:36 -- common/autotest_common.sh@859 -- # break 00:13:02.648 00:32:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:02.648 00:32:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:02.648 00:32:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.648 1+0 records in 00:13:02.648 1+0 records out 00:13:02.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654487 s, 6.3 MB/s 00:13:02.648 00:32:36 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.648 00:32:36 -- common/autotest_common.sh@872 -- # size=4096 00:13:02.648 00:32:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.648 00:32:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:02.648 00:32:36 -- common/autotest_common.sh@875 -- # return 0 00:13:02.648 00:32:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.648 00:32:36 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:02.648 00:32:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:13:02.907 00:32:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:13:02.907 00:32:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:13:02.907 00:32:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:13:02.907 00:32:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:13:02.907 00:32:36 -- common/autotest_common.sh@855 -- # local i 00:13:02.907 00:32:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:02.907 00:32:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:02.907 00:32:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:13:02.907 00:32:36 -- common/autotest_common.sh@859 -- # break 00:13:02.907 00:32:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:02.907 00:32:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:02.907 00:32:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.907 1+0 records in 00:13:02.907 1+0 records out 00:13:02.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840317 s, 4.9 MB/s 00:13:02.907 00:32:36 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.907 00:32:36 -- common/autotest_common.sh@872 -- # size=4096 00:13:02.907 00:32:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.907 00:32:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:02.907 00:32:36 -- common/autotest_common.sh@875 -- # return 0 00:13:02.907 00:32:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:02.907 00:32:36 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:02.907 00:32:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:13:03.166 00:32:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:13:03.166 00:32:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:13:03.166 00:32:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:13:03.166 00:32:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:13:03.166 00:32:36 -- common/autotest_common.sh@855 -- # local i 00:13:03.166 00:32:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:03.166 00:32:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:03.166 00:32:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:13:03.166 00:32:36 -- common/autotest_common.sh@859 -- # break 00:13:03.166 00:32:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:03.166 00:32:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:03.166 00:32:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.166 1+0 records in 00:13:03.166 1+0 records out 00:13:03.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130451 s, 3.1 MB/s 00:13:03.166 00:32:36 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.166 00:32:36 -- common/autotest_common.sh@872 -- # size=4096 00:13:03.166 00:32:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.166 00:32:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:03.166 00:32:36 -- common/autotest_common.sh@875 -- # return 0 00:13:03.166 00:32:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:03.166 00:32:36 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:13:03.166 00:32:36 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:03.425 00:32:36 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd0", 00:13:03.425 "bdev_name": "Malloc0" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd1", 00:13:03.425 "bdev_name": "Malloc1p0" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd2", 00:13:03.425 "bdev_name": "Malloc1p1" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd3", 00:13:03.425 "bdev_name": "Malloc2p0" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd4", 00:13:03.425 "bdev_name": "Malloc2p1" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd5", 00:13:03.425 "bdev_name": "Malloc2p2" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd6", 00:13:03.425 "bdev_name": "Malloc2p3" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd7", 00:13:03.425 "bdev_name": "Malloc2p4" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd8", 00:13:03.425 "bdev_name": "Malloc2p5" 
00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd9", 00:13:03.425 "bdev_name": "Malloc2p6" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd10", 00:13:03.425 "bdev_name": "Malloc2p7" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd11", 00:13:03.425 "bdev_name": "TestPT" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd12", 00:13:03.425 "bdev_name": "raid0" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd13", 00:13:03.425 "bdev_name": "concat0" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd14", 00:13:03.425 "bdev_name": "raid1" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd15", 00:13:03.425 "bdev_name": "AIO0" 00:13:03.425 } 00:13:03.425 ]' 00:13:03.425 00:32:36 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:03.425 00:32:36 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:03.425 00:32:36 -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd0", 00:13:03.425 "bdev_name": "Malloc0" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd1", 00:13:03.425 "bdev_name": "Malloc1p0" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd2", 00:13:03.425 "bdev_name": "Malloc1p1" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd3", 00:13:03.425 "bdev_name": "Malloc2p0" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd4", 00:13:03.425 "bdev_name": "Malloc2p1" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd5", 00:13:03.425 "bdev_name": "Malloc2p2" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd6", 00:13:03.425 "bdev_name": "Malloc2p3" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd7", 00:13:03.425 "bdev_name": "Malloc2p4" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd8", 00:13:03.425 "bdev_name": "Malloc2p5" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd9", 00:13:03.425 "bdev_name": "Malloc2p6" 00:13:03.425 }, 00:13:03.425 { 00:13:03.425 "nbd_device": "/dev/nbd10", 00:13:03.425 "bdev_name": "Malloc2p7" 00:13:03.425 }, 00:13:03.425 { 00:13:03.426 "nbd_device": "/dev/nbd11", 00:13:03.426 "bdev_name": "TestPT" 00:13:03.426 }, 00:13:03.426 { 00:13:03.426 "nbd_device": "/dev/nbd12", 00:13:03.426 "bdev_name": "raid0" 00:13:03.426 }, 00:13:03.426 { 00:13:03.426 "nbd_device": "/dev/nbd13", 00:13:03.426 "bdev_name": "concat0" 00:13:03.426 }, 00:13:03.426 { 00:13:03.426 "nbd_device": "/dev/nbd14", 00:13:03.426 "bdev_name": "raid1" 00:13:03.426 }, 00:13:03.426 { 00:13:03.426 "nbd_device": "/dev/nbd15", 00:13:03.426 "bdev_name": "AIO0" 00:13:03.426 } 00:13:03.426 ]' 00:13:03.426 00:32:36 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:13:03.426 00:32:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.426 00:32:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:13:03.426 00:32:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.426 00:32:36 -- bdev/nbd_common.sh@51 -- # local i 00:13:03.426 00:32:36 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.426 00:32:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@41 -- # break 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.684 00:32:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@41 -- # break 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.943 00:32:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@41 -- # break 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.202 00:32:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@41 -- # break 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.461 00:32:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:04.720 00:32:38 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@41 -- # break 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.720 00:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@41 -- # break 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.979 00:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@41 -- # break 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.238 00:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@41 -- # break 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.497 00:32:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@41 -- # break 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.755 00:32:39 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@41 -- # break 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.014 00:32:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@41 -- # break 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.273 00:32:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@41 -- # break 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.533 00:32:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@41 -- # break 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
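Teardown mirrors the start sequence: for each exported device the harness calls nbd_stop_disk over the same RPC socket, then waits for the node to vanish from /proc/partitions (waitfornbd_exit, traced as bdev/nbd_common.sh@35-45 above). A condensed sketch under the same assumptions (sequential device order as in the trace; the polling delay is illustrative):

  for dev in /dev/nbd{0..15}; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          nbd_stop_disk "$dev"
      name=${dev#/dev/}
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || break   # gone => stopped
          sleep 0.1   # assumed
      done
  done

The nbd_get_count check just after this loop confirms the result: nbd_get_disks returns an empty JSON array, jq -r '.[] | .nbd_device' extracts nothing, and grep -c /dev/nbd therefore reports a count of 0.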
00:13:06.792 00:32:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@41 -- # break 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.792 00:32:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@41 -- # break 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@41 -- # break 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@45 -- # return 0 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.360 00:32:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@65 -- # true 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@65 -- # count=0 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@122 -- # count=0 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@127 -- # return 0 00:13:07.618 00:32:41 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:07.618 00:32:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@12 -- # local i 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:07.619 00:32:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:07.878 /dev/nbd0 00:13:07.878 00:32:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:07.878 00:32:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:07.878 00:32:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:07.878 00:32:41 -- common/autotest_common.sh@855 -- # local i 00:13:07.878 00:32:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:07.878 00:32:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:07.878 00:32:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:07.878 00:32:41 -- common/autotest_common.sh@859 -- # break 00:13:07.878 00:32:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:07.878 00:32:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:07.878 00:32:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.878 1+0 records in 00:13:07.878 1+0 records out 00:13:07.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055002 s, 7.4 MB/s 00:13:07.878 00:32:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.878 00:32:41 -- common/autotest_common.sh@872 -- # size=4096 00:13:07.878 00:32:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.878 00:32:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:07.878 00:32:41 -- common/autotest_common.sh@875 -- # return 0 00:13:07.878 00:32:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.878 00:32:41 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:07.878 00:32:41 -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:13:08.137 /dev/nbd1 00:13:08.137 00:32:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:08.137 00:32:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:08.137 00:32:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:08.137 00:32:41 -- common/autotest_common.sh@855 -- # local i 00:13:08.137 00:32:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:08.137 00:32:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:08.137 00:32:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:08.137 00:32:41 -- common/autotest_common.sh@859 -- # break 00:13:08.137 00:32:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:08.137 00:32:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:08.137 00:32:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.137 1+0 records in 00:13:08.137 1+0 records out 00:13:08.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796122 s, 5.1 MB/s 00:13:08.137 00:32:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.137 00:32:41 -- common/autotest_common.sh@872 -- # size=4096 00:13:08.137 00:32:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.137 00:32:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:08.137 00:32:41 -- common/autotest_common.sh@875 -- # return 0 00:13:08.137 00:32:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.137 00:32:41 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:08.137 00:32:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:13:08.396 /dev/nbd10 00:13:08.396 00:32:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:08.396 00:32:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:08.655 00:32:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:13:08.655 00:32:41 -- common/autotest_common.sh@855 -- # local i 00:13:08.655 00:32:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:08.655 00:32:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:08.655 00:32:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:13:08.655 00:32:41 -- common/autotest_common.sh@859 -- # break 00:13:08.655 00:32:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:08.655 00:32:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:08.655 00:32:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.655 1+0 records in 00:13:08.655 1+0 records out 00:13:08.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417225 s, 9.8 MB/s 00:13:08.655 00:32:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.655 00:32:41 -- common/autotest_common.sh@872 -- # size=4096 00:13:08.655 00:32:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.655 00:32:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:08.655 00:32:41 -- common/autotest_common.sh@875 -- # return 0 00:13:08.655 00:32:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.655 00:32:41 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
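
On the attach side, each nbd_start_disk <bdev> /dev/nbdN RPC is paired with a waitfornbd check from common/autotest_common.sh: wait for nbdN to show up in /proc/partitions, then issue a single 4 KiB O_DIRECT read and require a non-empty result, so the test proceeds only once the device actually serves I/O. A sketch of that probe as implied by the @854-875 entries (the loop bounds, the nbdtest temp file, and the size test follow the trace; the retry delays and the failure return are assumptions):

waitfornbd() {
    local nbd_name=$1
    local i size
    local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                              # assumed; not visible in the trace
    done
    for ((i = 1; i <= 20; i++)); do
        # iflag=direct bypasses the page cache, so the read must hit the nbd server
        if dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s $tmp)
            rm -f $tmp
            [ "$size" != 0 ] && return 0       # one real block read back: device is live
        fi
        sleep 0.1                              # assumed
    done
    return 1                                   # assumed failure path; never hit in this run
}
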
00:13:08.655 00:32:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:13:08.655 /dev/nbd11 00:13:08.655 00:32:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:08.655 00:32:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:08.655 00:32:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:13:08.655 00:32:42 -- common/autotest_common.sh@855 -- # local i 00:13:08.655 00:32:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:08.655 00:32:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:08.655 00:32:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:13:08.655 00:32:42 -- common/autotest_common.sh@859 -- # break 00:13:08.655 00:32:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:08.655 00:32:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:08.655 00:32:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:08.655 1+0 records in 00:13:08.655 1+0 records out 00:13:08.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420334 s, 9.7 MB/s 00:13:08.655 00:32:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.655 00:32:42 -- common/autotest_common.sh@872 -- # size=4096 00:13:08.655 00:32:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:08.914 00:32:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:08.914 00:32:42 -- common/autotest_common.sh@875 -- # return 0 00:13:08.914 00:32:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:08.914 00:32:42 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:08.914 00:32:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:13:09.174 /dev/nbd12 00:13:09.174 00:32:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:09.174 00:32:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:09.174 00:32:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:13:09.174 00:32:42 -- common/autotest_common.sh@855 -- # local i 00:13:09.174 00:32:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:09.174 00:32:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:09.174 00:32:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:13:09.174 00:32:42 -- common/autotest_common.sh@859 -- # break 00:13:09.174 00:32:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:09.174 00:32:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:09.174 00:32:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.174 1+0 records in 00:13:09.174 1+0 records out 00:13:09.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406709 s, 10.1 MB/s 00:13:09.174 00:32:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.174 00:32:42 -- common/autotest_common.sh@872 -- # size=4096 00:13:09.174 00:32:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.174 00:32:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:09.174 00:32:42 -- common/autotest_common.sh@875 -- # return 0 00:13:09.174 00:32:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.174 00:32:42 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:09.174 00:32:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:13:09.174 /dev/nbd13 00:13:09.433 00:32:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:09.433 00:32:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:09.433 00:32:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:13:09.433 00:32:42 -- common/autotest_common.sh@855 -- # local i 00:13:09.433 00:32:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:09.433 00:32:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:09.433 00:32:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:13:09.433 00:32:42 -- common/autotest_common.sh@859 -- # break 00:13:09.433 00:32:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:09.433 00:32:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:09.433 00:32:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.433 1+0 records in 00:13:09.433 1+0 records out 00:13:09.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043744 s, 9.4 MB/s 00:13:09.433 00:32:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.433 00:32:42 -- common/autotest_common.sh@872 -- # size=4096 00:13:09.433 00:32:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.433 00:32:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:09.433 00:32:42 -- common/autotest_common.sh@875 -- # return 0 00:13:09.433 00:32:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.433 00:32:42 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:09.433 00:32:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:13:09.433 /dev/nbd14 00:13:09.433 00:32:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:13:09.433 00:32:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:13:09.433 00:32:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:13:09.433 00:32:43 -- common/autotest_common.sh@855 -- # local i 00:13:09.433 00:32:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:09.433 00:32:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:09.433 00:32:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:13:09.433 00:32:43 -- common/autotest_common.sh@859 -- # break 00:13:09.433 00:32:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:09.433 00:32:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:09.433 00:32:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.691 1+0 records in 00:13:09.691 1+0 records out 00:13:09.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465722 s, 8.8 MB/s 00:13:09.691 00:32:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.691 00:32:43 -- common/autotest_common.sh@872 -- # size=4096 00:13:09.691 00:32:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.691 00:32:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:09.691 00:32:43 -- common/autotest_common.sh@875 -- # return 0 00:13:09.691 00:32:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
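
One small detail in all of these probes is the -w in grep -q -w nbdN /proc/partitions: it forces whole-word matching, which matters once sixteen devices coexist, since a plain substring match for nbd1 would also be satisfied by nbd10 through nbd15. For example:

grep -q -w nbd1 /proc/partitions   # true only if the nbd1 row itself is present
grep -q nbd1 /proc/partitions      # would also fire on nbd10..nbd15
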
00:13:09.691 00:32:43 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:09.691 00:32:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:13:09.950 /dev/nbd15 00:13:09.950 00:32:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:13:09.950 00:32:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:13:09.950 00:32:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:13:09.950 00:32:43 -- common/autotest_common.sh@855 -- # local i 00:13:09.950 00:32:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:09.950 00:32:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:09.950 00:32:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:13:09.950 00:32:43 -- common/autotest_common.sh@859 -- # break 00:13:09.950 00:32:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:09.950 00:32:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:09.950 00:32:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.950 1+0 records in 00:13:09.950 1+0 records out 00:13:09.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909384 s, 4.5 MB/s 00:13:09.950 00:32:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.950 00:32:43 -- common/autotest_common.sh@872 -- # size=4096 00:13:09.950 00:32:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.950 00:32:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:09.950 00:32:43 -- common/autotest_common.sh@875 -- # return 0 00:13:09.950 00:32:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.950 00:32:43 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:09.950 00:32:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:13:10.208 /dev/nbd2 00:13:10.208 00:32:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:13:10.208 00:32:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:13:10.208 00:32:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:13:10.208 00:32:43 -- common/autotest_common.sh@855 -- # local i 00:13:10.208 00:32:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:10.208 00:32:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:10.208 00:32:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:13:10.208 00:32:43 -- common/autotest_common.sh@859 -- # break 00:13:10.208 00:32:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:10.208 00:32:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:10.208 00:32:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.208 1+0 records in 00:13:10.208 1+0 records out 00:13:10.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663956 s, 6.2 MB/s 00:13:10.208 00:32:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.208 00:32:43 -- common/autotest_common.sh@872 -- # size=4096 00:13:10.208 00:32:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.208 00:32:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:10.208 00:32:43 -- common/autotest_common.sh@875 -- # return 0 00:13:10.208 00:32:43 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:13:10.208 00:32:43 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:10.208 00:32:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:13:10.467 /dev/nbd3 00:13:10.467 00:32:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:13:10.467 00:32:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:13:10.467 00:32:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:13:10.467 00:32:43 -- common/autotest_common.sh@855 -- # local i 00:13:10.467 00:32:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:10.467 00:32:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:10.467 00:32:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:13:10.467 00:32:43 -- common/autotest_common.sh@859 -- # break 00:13:10.467 00:32:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:10.467 00:32:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:10.467 00:32:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.467 1+0 records in 00:13:10.467 1+0 records out 00:13:10.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751329 s, 5.5 MB/s 00:13:10.467 00:32:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.467 00:32:43 -- common/autotest_common.sh@872 -- # size=4096 00:13:10.467 00:32:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.467 00:32:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:10.467 00:32:43 -- common/autotest_common.sh@875 -- # return 0 00:13:10.467 00:32:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.467 00:32:43 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:10.467 00:32:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:13:10.725 /dev/nbd4 00:13:10.725 00:32:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:13:10.725 00:32:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:13:10.725 00:32:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:13:10.725 00:32:44 -- common/autotest_common.sh@855 -- # local i 00:13:10.725 00:32:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:10.725 00:32:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:10.725 00:32:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:13:10.725 00:32:44 -- common/autotest_common.sh@859 -- # break 00:13:10.725 00:32:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:10.725 00:32:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:10.725 00:32:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.725 1+0 records in 00:13:10.725 1+0 records out 00:13:10.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633764 s, 6.5 MB/s 00:13:10.725 00:32:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.725 00:32:44 -- common/autotest_common.sh@872 -- # size=4096 00:13:10.725 00:32:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.725 00:32:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:10.725 00:32:44 -- common/autotest_common.sh@875 -- # return 0 00:13:10.725 00:32:44 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.725 00:32:44 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:10.725 00:32:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:13:10.983 /dev/nbd5 00:13:10.983 00:32:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:13:10.983 00:32:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:13:10.983 00:32:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:13:10.983 00:32:44 -- common/autotest_common.sh@855 -- # local i 00:13:10.983 00:32:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:10.983 00:32:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:10.983 00:32:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:13:10.983 00:32:44 -- common/autotest_common.sh@859 -- # break 00:13:10.983 00:32:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:10.983 00:32:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:10.983 00:32:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.983 1+0 records in 00:13:10.983 1+0 records out 00:13:10.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000863204 s, 4.7 MB/s 00:13:10.983 00:32:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.983 00:32:44 -- common/autotest_common.sh@872 -- # size=4096 00:13:10.983 00:32:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.983 00:32:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:10.983 00:32:44 -- common/autotest_common.sh@875 -- # return 0 00:13:10.983 00:32:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.983 00:32:44 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:10.983 00:32:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:13:11.241 /dev/nbd6 00:13:11.241 00:32:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:13:11.241 00:32:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:13:11.241 00:32:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:13:11.241 00:32:44 -- common/autotest_common.sh@855 -- # local i 00:13:11.241 00:32:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:11.241 00:32:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:11.241 00:32:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:13:11.241 00:32:44 -- common/autotest_common.sh@859 -- # break 00:13:11.241 00:32:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:11.241 00:32:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:11.241 00:32:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.241 1+0 records in 00:13:11.241 1+0 records out 00:13:11.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491656 s, 8.3 MB/s 00:13:11.241 00:32:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.241 00:32:44 -- common/autotest_common.sh@872 -- # size=4096 00:13:11.241 00:32:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.241 00:32:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:11.241 00:32:44 -- common/autotest_common.sh@875 -- # return 0 00:13:11.241 00:32:44 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.241 00:32:44 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:11.241 00:32:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:13:11.500 /dev/nbd7 00:13:11.500 00:32:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:13:11.500 00:32:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:13:11.500 00:32:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:13:11.500 00:32:45 -- common/autotest_common.sh@855 -- # local i 00:13:11.500 00:32:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:11.500 00:32:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:11.500 00:32:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:13:11.500 00:32:45 -- common/autotest_common.sh@859 -- # break 00:13:11.500 00:32:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:11.500 00:32:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:11.500 00:32:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.500 1+0 records in 00:13:11.500 1+0 records out 00:13:11.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547554 s, 7.5 MB/s 00:13:11.500 00:32:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.500 00:32:45 -- common/autotest_common.sh@872 -- # size=4096 00:13:11.500 00:32:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.500 00:32:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:11.500 00:32:45 -- common/autotest_common.sh@875 -- # return 0 00:13:11.500 00:32:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.500 00:32:45 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:11.500 00:32:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:13:11.759 /dev/nbd8 00:13:11.759 00:32:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:13:11.759 00:32:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:13:11.759 00:32:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:13:11.759 00:32:45 -- common/autotest_common.sh@855 -- # local i 00:13:11.759 00:32:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:11.759 00:32:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:11.759 00:32:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:13:11.759 00:32:45 -- common/autotest_common.sh@859 -- # break 00:13:11.759 00:32:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:11.759 00:32:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:11.759 00:32:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.759 1+0 records in 00:13:11.759 1+0 records out 00:13:11.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679636 s, 6.0 MB/s 00:13:11.759 00:32:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.759 00:32:45 -- common/autotest_common.sh@872 -- # size=4096 00:13:11.759 00:32:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.759 00:32:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:11.759 00:32:45 -- common/autotest_common.sh@875 -- # return 0 00:13:12.017 00:32:45 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:13:12.017 /dev/nbd9 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:13:12.017 00:32:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:13:12.017 00:32:45 -- common/autotest_common.sh@855 -- # local i 00:13:12.017 00:32:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:12.017 00:32:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:12.017 00:32:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:13:12.017 00:32:45 -- common/autotest_common.sh@859 -- # break 00:13:12.017 00:32:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:12.017 00:32:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:12.017 00:32:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:12.017 1+0 records in 00:13:12.017 1+0 records out 00:13:12.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808749 s, 5.1 MB/s 00:13:12.017 00:32:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.017 00:32:45 -- common/autotest_common.sh@872 -- # size=4096 00:13:12.017 00:32:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.017 00:32:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:12.017 00:32:45 -- common/autotest_common.sh@875 -- # return 0 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:12.017 00:32:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:12.584 00:32:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:12.584 { 00:13:12.584 "nbd_device": "/dev/nbd0", 00:13:12.584 "bdev_name": "Malloc0" 00:13:12.584 }, 00:13:12.584 { 00:13:12.585 "nbd_device": "/dev/nbd1", 00:13:12.585 "bdev_name": "Malloc1p0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd10", 00:13:12.585 "bdev_name": "Malloc1p1" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd11", 00:13:12.585 "bdev_name": "Malloc2p0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd12", 00:13:12.585 "bdev_name": "Malloc2p1" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd13", 00:13:12.585 "bdev_name": "Malloc2p2" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd14", 00:13:12.585 "bdev_name": "Malloc2p3" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd15", 00:13:12.585 "bdev_name": "Malloc2p4" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd2", 00:13:12.585 "bdev_name": "Malloc2p5" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd3", 00:13:12.585 "bdev_name": "Malloc2p6" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd4", 00:13:12.585 "bdev_name": "Malloc2p7" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd5", 00:13:12.585 "bdev_name": 
"TestPT" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd6", 00:13:12.585 "bdev_name": "raid0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd7", 00:13:12.585 "bdev_name": "concat0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd8", 00:13:12.585 "bdev_name": "raid1" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd9", 00:13:12.585 "bdev_name": "AIO0" 00:13:12.585 } 00:13:12.585 ]' 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd0", 00:13:12.585 "bdev_name": "Malloc0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd1", 00:13:12.585 "bdev_name": "Malloc1p0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd10", 00:13:12.585 "bdev_name": "Malloc1p1" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd11", 00:13:12.585 "bdev_name": "Malloc2p0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd12", 00:13:12.585 "bdev_name": "Malloc2p1" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd13", 00:13:12.585 "bdev_name": "Malloc2p2" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd14", 00:13:12.585 "bdev_name": "Malloc2p3" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd15", 00:13:12.585 "bdev_name": "Malloc2p4" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd2", 00:13:12.585 "bdev_name": "Malloc2p5" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd3", 00:13:12.585 "bdev_name": "Malloc2p6" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd4", 00:13:12.585 "bdev_name": "Malloc2p7" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd5", 00:13:12.585 "bdev_name": "TestPT" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd6", 00:13:12.585 "bdev_name": "raid0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd7", 00:13:12.585 "bdev_name": "concat0" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd8", 00:13:12.585 "bdev_name": "raid1" 00:13:12.585 }, 00:13:12.585 { 00:13:12.585 "nbd_device": "/dev/nbd9", 00:13:12.585 "bdev_name": "AIO0" 00:13:12.585 } 00:13:12.585 ]' 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:12.585 /dev/nbd1 00:13:12.585 /dev/nbd10 00:13:12.585 /dev/nbd11 00:13:12.585 /dev/nbd12 00:13:12.585 /dev/nbd13 00:13:12.585 /dev/nbd14 00:13:12.585 /dev/nbd15 00:13:12.585 /dev/nbd2 00:13:12.585 /dev/nbd3 00:13:12.585 /dev/nbd4 00:13:12.585 /dev/nbd5 00:13:12.585 /dev/nbd6 00:13:12.585 /dev/nbd7 00:13:12.585 /dev/nbd8 00:13:12.585 /dev/nbd9' 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:12.585 /dev/nbd1 00:13:12.585 /dev/nbd10 00:13:12.585 /dev/nbd11 00:13:12.585 /dev/nbd12 00:13:12.585 /dev/nbd13 00:13:12.585 /dev/nbd14 00:13:12.585 /dev/nbd15 00:13:12.585 /dev/nbd2 00:13:12.585 /dev/nbd3 00:13:12.585 /dev/nbd4 00:13:12.585 /dev/nbd5 00:13:12.585 /dev/nbd6 00:13:12.585 /dev/nbd7 00:13:12.585 /dev/nbd8 00:13:12.585 /dev/nbd9' 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@65 -- # count=16 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@66 -- # echo 16 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@95 -- # count=16 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:13:12.585 00:32:45 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:12.585 256+0 records in 00:13:12.585 256+0 records out 00:13:12.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110691 s, 94.7 MB/s 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.585 00:32:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:12.585 256+0 records in 00:13:12.585 256+0 records out 00:13:12.585 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126234 s, 8.3 MB/s 00:13:12.585 00:32:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.585 00:32:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:12.843 256+0 records in 00:13:12.843 256+0 records out 00:13:12.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123081 s, 8.5 MB/s 00:13:12.843 00:32:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.843 00:32:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:12.843 256+0 records in 00:13:12.843 256+0 records out 00:13:12.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123988 s, 8.5 MB/s 00:13:12.843 00:32:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:12.843 00:32:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:13.102 256+0 records in 00:13:13.102 256+0 records out 00:13:13.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12432 s, 8.4 MB/s 00:13:13.102 00:32:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.102 00:32:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:13.102 256+0 records in 00:13:13.102 256+0 records out 00:13:13.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141209 s, 7.4 MB/s 00:13:13.102 00:32:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.102 00:32:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:13.361 256+0 records in 00:13:13.361 256+0 records out 00:13:13.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123144 s, 8.5 MB/s 00:13:13.361 00:32:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.361 00:32:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:13:13.361 256+0 records 
in 00:13:13.361 256+0 records out 00:13:13.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124568 s, 8.4 MB/s 00:13:13.361 00:32:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.361 00:32:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:13:13.620 256+0 records in 00:13:13.620 256+0 records out 00:13:13.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135854 s, 7.7 MB/s 00:13:13.620 00:32:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.620 00:32:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:13:13.620 256+0 records in 00:13:13.620 256+0 records out 00:13:13.620 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136919 s, 7.7 MB/s 00:13:13.620 00:32:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.620 00:32:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:13:13.878 256+0 records in 00:13:13.878 256+0 records out 00:13:13.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132452 s, 7.9 MB/s 00:13:13.878 00:32:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.878 00:32:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:13:13.878 256+0 records in 00:13:13.878 256+0 records out 00:13:13.878 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125809 s, 8.3 MB/s 00:13:13.878 00:32:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:13.878 00:32:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:13:14.137 256+0 records in 00:13:14.137 256+0 records out 00:13:14.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121977 s, 8.6 MB/s 00:13:14.137 00:32:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.137 00:32:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:13:14.137 256+0 records in 00:13:14.137 256+0 records out 00:13:14.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126208 s, 8.3 MB/s 00:13:14.137 00:32:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.137 00:32:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:13:14.395 256+0 records in 00:13:14.395 256+0 records out 00:13:14.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124986 s, 8.4 MB/s 00:13:14.395 00:32:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.395 00:32:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:13:14.395 256+0 records in 00:13:14.395 256+0 records out 00:13:14.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130598 s, 8.0 MB/s 00:13:14.395 00:32:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:14.395 00:32:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:13:14.653 256+0 records in 00:13:14.653 256+0 records out 00:13:14.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209253 s, 5.0 MB/s 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.653 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@51 -- # local i 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.912 00:32:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@41 -- # break 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.170 00:32:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@41 -- # break 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.428 00:32:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:15.687 
00:32:49 -- bdev/nbd_common.sh@41 -- # break 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.687 00:32:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@41 -- # break 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.945 00:32:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@41 -- # break 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.202 00:32:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@41 -- # break 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.461 00:32:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@41 -- # break 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.719 00:32:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:13:16.978 00:32:50 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@41 -- # break 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.978 00:32:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@41 -- # break 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.236 00:32:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@41 -- # break 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.496 00:32:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@41 -- # break 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.754 00:32:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@41 -- # break 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@45 -- # return 0 
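
Once every device has been detached, the suite re-checks the server's view with nbd_get_count, traced at bdev/nbd_common.sh@61-66 both after the earlier stop pass and again below: nbd_get_disks returns a JSON array, jq projects the .nbd_device fields, and grep -c /dev/nbd counts them, the bare "true" in the trace indicating an "|| true" that absorbs grep's non-zero exit when nothing matches. A sketch under those assumptions:

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits 1 when the count is 0, so tolerate that under set -e
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}

The caller compares the echoed count against the expected number of devices ('[' 0 -ne 0 ']' here) and fails the test on a mismatch.
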
00:13:18.012 00:32:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.012 00:32:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@41 -- # break 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.270 00:32:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@41 -- # break 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.529 00:32:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@41 -- # break 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.787 00:32:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@41 -- # break 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.046 00:32:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 
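With all devices stopped, the trace that continues below verifies nothing is left exported: nbd_get_disks returns an empty JSON array, jq -r '.[] | .nbd_device' extracts the device names, and grep -c /dev/nbd counts them, with a trailing true absorbing grep's non-zero exit when there are zero matches. A sketch of that counting helper, reconstructed from the trace that follows (exact variable handling in nbd_common.sh may differ):

    # Count nbd devices still exported by the SPDK app behind the given RPC socket.
    nbd_get_count() {
            local rpc_server=$1
            local nbd_disks_json nbd_disks_name count
            nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
            nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
            count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
            echo "$count"    # 0 in this run, so the '-ne 0' failure branch is skipped
    }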
00:13:19.311 00:32:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@65 -- # true 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@65 -- # count=0 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@104 -- # count=0 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@109 -- # return 0 00:13:19.311 00:32:52 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:19.311 00:32:52 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:19.579 malloc_lvol_verify 00:13:19.579 00:32:53 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:19.838 3b2f62db-2028-41b3-a69a-b5cafc59f87e 00:13:19.838 00:32:53 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:20.096 e8148366-b450-4c79-b25d-e3d6d8b8f2c5 00:13:20.096 00:32:53 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:20.354 /dev/nbd0 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:20.354 mke2fs 1.46.5 (30-Dec-2021) 00:13:20.354 00:13:20.354 Filesystem too small for a journal 00:13:20.354 Discarding device blocks: 0/1024 done 00:13:20.354 Creating filesystem with 1024 4k blocks and 1024 inodes 00:13:20.354 00:13:20.354 Allocating group tables: 0/1 done 00:13:20.354 Writing inode tables: 0/1 done 00:13:20.354 Writing superblocks and filesystem accounting information: 0/1 done 00:13:20.354 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@51 -- # local i 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.354 00:32:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:20.612 00:32:54 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@41 -- # break 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@45 -- # return 0 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:20.612 00:32:54 -- bdev/nbd_common.sh@147 -- # return 0 00:13:20.612 00:32:54 -- bdev/blockdev.sh@326 -- # killprocess 116104 00:13:20.612 00:32:54 -- common/autotest_common.sh@936 -- # '[' -z 116104 ']' 00:13:20.612 00:32:54 -- common/autotest_common.sh@940 -- # kill -0 116104 00:13:20.612 00:32:54 -- common/autotest_common.sh@941 -- # uname 00:13:20.612 00:32:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:20.612 00:32:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 116104 00:13:20.612 00:32:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:20.612 00:32:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:20.612 killing process with pid 116104 00:13:20.612 00:32:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 116104' 00:13:20.612 00:32:54 -- common/autotest_common.sh@955 -- # kill 116104 00:13:20.612 00:32:54 -- common/autotest_common.sh@960 -- # wait 116104 00:13:22.513 00:32:56 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:13:22.513 00:13:22.513 real 0m25.397s 00:13:22.513 user 0m35.017s 00:13:22.513 sys 0m9.017s 00:13:22.513 00:32:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:22.513 00:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:22.513 ************************************ 00:13:22.513 END TEST bdev_nbd 00:13:22.513 ************************************ 00:13:22.792 00:32:56 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:13:22.792 00:32:56 -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:13:22.792 00:32:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:22.792 00:32:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.792 00:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:22.792 ************************************ 00:13:22.792 START TEST bdev_fio 00:13:22.792 ************************************ 00:13:22.792 00:32:56 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@331 -- # local env_context 00:13:22.792 00:32:56 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:22.792 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:22.792 00:32:56 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:22.792 00:32:56 -- bdev/blockdev.sh@339 -- # echo '' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:13:22.792 00:32:56 -- bdev/blockdev.sh@339 -- # env_context= 00:13:22.792 00:32:56 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:22.792 00:32:56 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.792 00:32:56 -- common/autotest_common.sh@1267 -- # local workload=verify 00:13:22.792 00:32:56 -- common/autotest_common.sh@1268 -- # 
local bdev_type=AIO 00:13:22.792 00:32:56 -- common/autotest_common.sh@1269 -- # local env_context= 00:13:22.792 00:32:56 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:13:22.792 00:32:56 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:22.792 00:32:56 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:13:22.792 00:32:56 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:13:22.792 00:32:56 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:22.792 00:32:56 -- common/autotest_common.sh@1287 -- # cat 00:13:22.792 00:32:56 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:13:22.792 00:32:56 -- common/autotest_common.sh@1300 -- # cat 00:13:22.792 00:32:56 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:13:22.792 00:32:56 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:13:22.792 00:32:56 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:22.792 00:32:56 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo 
'[job_Malloc2p7]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:13:22.792 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:13:22.792 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.792 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:13:22.793 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:13:22.793 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.793 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:13:22.793 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:13:22.793 00:32:56 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:22.793 00:32:56 -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:13:22.793 00:32:56 -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:13:22.793 00:32:56 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:22.793 00:32:56 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.793 00:32:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:22.793 00:32:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.793 00:32:56 -- common/autotest_common.sh@10 -- # set +x 00:13:22.793 ************************************ 00:13:22.793 START TEST bdev_fio_rw_verify 00:13:22.793 ************************************ 00:13:22.793 00:32:56 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.793 00:32:56 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:22.793 00:32:56 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:13:22.793 00:32:56 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:22.793 00:32:56 -- common/autotest_common.sh@1325 -- # local sanitizers 00:13:22.793 00:32:56 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.793 00:32:56 -- common/autotest_common.sh@1327 -- # shift 00:13:22.793 00:32:56 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:13:22.793 00:32:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:13:22.793 00:32:56 -- common/autotest_common.sh@1331 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:22.793 00:32:56 -- common/autotest_common.sh@1331 -- # grep libasan 00:13:22.793 00:32:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:13:22.793 00:32:56 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:22.793 00:32:56 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:22.793 00:32:56 -- common/autotest_common.sh@1333 -- # break 00:13:22.793 00:32:56 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:22.793 00:32:56 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:23.051 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:23.051 fio-3.35 00:13:23.051 Starting 16 threads 00:13:35.275 00:13:35.275 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=117283: Sat Apr 27 00:33:07 2024 00:13:35.275 read: IOPS=64.9k, BW=253MiB/s (266MB/s)(2534MiB/10002msec) 00:13:35.275 slat (usec): min=2, max=56065, avg=44.79, stdev=494.17 00:13:35.275 clat (usec): min=9, max=56318, avg=342.74, stdev=1370.56 00:13:35.275 lat (usec): min=24, max=56335, avg=387.53, stdev=1456.43 00:13:35.275 clat percentiles 
(usec): 00:13:35.275 | 50.000th=[ 212], 99.000th=[ 938], 99.900th=[16450], 99.990th=[28967], 00:13:35.275 | 99.999th=[32375] 00:13:35.275 write: IOPS=103k, BW=401MiB/s (420MB/s)(3972MiB/9911msec); 0 zone resets 00:13:35.275 slat (usec): min=9, max=56414, avg=80.40, stdev=772.39 00:13:35.275 clat (usec): min=11, max=81782, avg=458.93, stdev=1769.50 00:13:35.275 lat (usec): min=41, max=81834, avg=539.33, stdev=1930.32 00:13:35.275 clat percentiles (usec): 00:13:35.275 | 50.000th=[ 265], 99.000th=[ 9896], 99.900th=[24249], 99.990th=[40109], 00:13:35.275 | 99.999th=[51643] 00:13:35.275 bw ( KiB/s): min=244353, max=631936, per=99.39%, avg=407915.21, stdev=6942.44, samples=304 00:13:35.275 iops : min=61088, max=157984, avg=101978.58, stdev=1735.61, samples=304 00:13:35.275 lat (usec) : 10=0.01%, 20=0.01%, 50=0.40%, 100=6.71%, 250=46.25% 00:13:35.275 lat (usec) : 500=42.61%, 750=2.54%, 1000=0.11% 00:13:35.275 lat (msec) : 2=0.16%, 4=0.12%, 10=0.22%, 20=0.76%, 50=0.13% 00:13:35.275 lat (msec) : 100=0.01% 00:13:35.275 cpu : usr=56.18%, sys=2.00%, ctx=216497, majf=2, minf=70588 00:13:35.275 IO depths : 1=11.2%, 2=23.6%, 4=52.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.275 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.275 issued rwts: total=648738,1016946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.275 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:35.275 00:13:35.275 Run status group 0 (all jobs): 00:13:35.275 READ: bw=253MiB/s (266MB/s), 253MiB/s-253MiB/s (266MB/s-266MB/s), io=2534MiB (2657MB), run=10002-10002msec 00:13:35.275 WRITE: bw=401MiB/s (420MB/s), 401MiB/s-401MiB/s (420MB/s-420MB/s), io=3972MiB (4165MB), run=9911-9911msec 00:13:37.183 ----------------------------------------------------- 00:13:37.183 Suppressions used: 00:13:37.183 count bytes template 00:13:37.183 16 140 /usr/src/fio/parse.c 00:13:37.183 11376 1092096 /usr/src/fio/iolog.c 00:13:37.183 1 904 libcrypto.so 00:13:37.183 ----------------------------------------------------- 00:13:37.183 00:13:37.183 00:13:37.183 real 0m14.029s 00:13:37.183 user 1m35.896s 00:13:37.183 sys 0m4.026s 00:13:37.183 00:33:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:37.183 ************************************ 00:13:37.183 END TEST bdev_fio_rw_verify 00:13:37.183 ************************************ 00:13:37.183 00:33:10 -- common/autotest_common.sh@10 -- # set +x 00:13:37.183 00:33:10 -- bdev/blockdev.sh@350 -- # rm -f 00:13:37.183 00:33:10 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:37.183 00:33:10 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:37.183 00:33:10 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:37.183 00:33:10 -- common/autotest_common.sh@1267 -- # local workload=trim 00:13:37.183 00:33:10 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:13:37.183 00:33:10 -- common/autotest_common.sh@1269 -- # local env_context= 00:13:37.183 00:33:10 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:13:37.183 00:33:10 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:37.183 00:33:10 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:13:37.183 00:33:10 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:13:37.183 00:33:10 -- 
common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:37.183 00:33:10 -- common/autotest_common.sh@1287 -- # cat 00:13:37.183 00:33:10 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:13:37.183 00:33:10 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:13:37.183 00:33:10 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:13:37.183 00:33:10 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:37.184 00:33:10 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "662d598b-583b-4f15-aa40-184ea4bae8da"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "662d598b-583b-4f15-aa40-184ea4bae8da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "fc3495bf-5c14-57cd-88b3-a9185e571411"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fc3495bf-5c14-57cd-88b3-a9185e571411",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "09b8a3b6-21b6-5b50-8413-52c8031c05c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "09b8a3b6-21b6-5b50-8413-52c8031c05c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "26fe7ac1-4761-5d28-a833-ebfc874ecd09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "26fe7ac1-4761-5d28-a833-ebfc874ecd09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b31a8e02-2df0-5c3c-9486-ac4ffafe91da"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b31a8e02-2df0-5c3c-9486-ac4ffafe91da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1168c312-d62e-5500-9d58-0e8bcc03c3be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1168c312-d62e-5500-9d58-0e8bcc03c3be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f0ac4d95-df5e-5412-afb9-e4d5f8370e5e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f0ac4d95-df5e-5412-afb9-e4d5f8370e5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "741603e3-8fd9-5732-8248-f78d92f1b7bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "741603e3-8fd9-5732-8248-f78d92f1b7bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "a60fd290-d0b0-5e7a-93c3-5ac75a6011de"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a60fd290-d0b0-5e7a-93c3-5ac75a6011de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "137ec376-6fd6-55b9-af28-97c73dbf35a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "137ec376-6fd6-55b9-af28-97c73dbf35a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "7f7d1238-7c6e-5e18-a9fd-8b709a524ec0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f7d1238-7c6e-5e18-a9fd-8b709a524ec0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1d7ddd13-136d-5149-8a73-a176d0b4e248"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1d7ddd13-136d-5149-8a73-a176d0b4e248",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d9d25a32-edf3-4e80-9c80-cf8f8682650e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d9d25a32-edf3-4e80-9c80-cf8f8682650e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d9d25a32-edf3-4e80-9c80-cf8f8682650e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0982c67a-94f0-407e-b628-ecc07df8d653",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2d34cf55-c54f-4210-9fe2-306b31ff9b07",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4662b4c1-0c34-4197-aec1-f270f607bc28"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4662b4c1-0c34-4197-aec1-f270f607bc28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4662b4c1-0c34-4197-aec1-f270f607bc28",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7a00da65-f4bc-4164-8dde-43a5bf114cb2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "8c16e05d-e931-4d57-96ce-b9bb3c389048",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b204c3a3-8b5d-4742-a76c-62dce670ce88"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b204c3a3-8b5d-4742-a76c-62dce670ce88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' 
"driver_specific": {' ' "raid": {' ' "uuid": "b204c3a3-8b5d-4742-a76c-62dce670ce88",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "85e854c9-9cab-441d-b692-c0d284923b12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "216419db-81fa-4260-a928-f46716e168ac",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "91d28b83-2ecc-431f-918b-145bd42b24c3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "91d28b83-2ecc-431f-918b-145bd42b24c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:37.184 00:33:10 -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:13:37.184 Malloc1p0 00:13:37.184 Malloc1p1 00:13:37.184 Malloc2p0 00:13:37.184 Malloc2p1 00:13:37.184 Malloc2p2 00:13:37.184 Malloc2p3 00:13:37.184 Malloc2p4 00:13:37.184 Malloc2p5 00:13:37.184 Malloc2p6 00:13:37.184 Malloc2p7 00:13:37.184 TestPT 00:13:37.184 raid0 00:13:37.184 concat0 ]] 00:13:37.184 00:33:10 -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:37.185 00:33:10 -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "662d598b-583b-4f15-aa40-184ea4bae8da"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "662d598b-583b-4f15-aa40-184ea4bae8da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "fc3495bf-5c14-57cd-88b3-a9185e571411"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "fc3495bf-5c14-57cd-88b3-a9185e571411",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "09b8a3b6-21b6-5b50-8413-52c8031c05c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "09b8a3b6-21b6-5b50-8413-52c8031c05c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "26fe7ac1-4761-5d28-a833-ebfc874ecd09"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "26fe7ac1-4761-5d28-a833-ebfc874ecd09",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b31a8e02-2df0-5c3c-9486-ac4ffafe91da"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b31a8e02-2df0-5c3c-9486-ac4ffafe91da",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "1168c312-d62e-5500-9d58-0e8bcc03c3be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1168c312-d62e-5500-9d58-0e8bcc03c3be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "f0ac4d95-df5e-5412-afb9-e4d5f8370e5e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f0ac4d95-df5e-5412-afb9-e4d5f8370e5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "741603e3-8fd9-5732-8248-f78d92f1b7bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "741603e3-8fd9-5732-8248-f78d92f1b7bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "a60fd290-d0b0-5e7a-93c3-5ac75a6011de"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a60fd290-d0b0-5e7a-93c3-5ac75a6011de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "137ec376-6fd6-55b9-af28-97c73dbf35a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "137ec376-6fd6-55b9-af28-97c73dbf35a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "7f7d1238-7c6e-5e18-a9fd-8b709a524ec0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7f7d1238-7c6e-5e18-a9fd-8b709a524ec0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "1d7ddd13-136d-5149-8a73-a176d0b4e248"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 
65536,' ' "uuid": "1d7ddd13-136d-5149-8a73-a176d0b4e248",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d9d25a32-edf3-4e80-9c80-cf8f8682650e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d9d25a32-edf3-4e80-9c80-cf8f8682650e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d9d25a32-edf3-4e80-9c80-cf8f8682650e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0982c67a-94f0-407e-b628-ecc07df8d653",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2d34cf55-c54f-4210-9fe2-306b31ff9b07",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "4662b4c1-0c34-4197-aec1-f270f607bc28"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "4662b4c1-0c34-4197-aec1-f270f607bc28",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4662b4c1-0c34-4197-aec1-f270f607bc28",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' 
"num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "7a00da65-f4bc-4164-8dde-43a5bf114cb2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "8c16e05d-e931-4d57-96ce-b9bb3c389048",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b204c3a3-8b5d-4742-a76c-62dce670ce88"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b204c3a3-8b5d-4742-a76c-62dce670ce88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b204c3a3-8b5d-4742-a76c-62dce670ce88",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "85e854c9-9cab-441d-b692-c0d284923b12",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "216419db-81fa-4260-a928-f46716e168ac",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "91d28b83-2ecc-431f-918b-145bd42b24c3"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "91d28b83-2ecc-431f-918b-145bd42b24c3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:37.185 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.185 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:13:37.185 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:13:37.185 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.185 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:13:37.185 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:13:37.185 00:33:10 -- bdev/blockdev.sh@356 -- # for b in 
$(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.185 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:13:37.185 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:13:37.185 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.185 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:13:37.185 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:13:37.185 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.185 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:13:37.185 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:13:37.186 00:33:10 -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:37.186 00:33:10 -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:13:37.186 00:33:10 -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:13:37.186 00:33:10 -- 
bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:37.186 00:33:10 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:37.186 00:33:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.186 00:33:10 -- common/autotest_common.sh@10 -- # set +x 00:13:37.186 ************************************ 00:13:37.186 START TEST bdev_fio_trim 00:13:37.186 ************************************ 00:13:37.186 00:33:10 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:37.186 00:33:10 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:37.186 00:33:10 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:13:37.186 00:33:10 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:37.186 00:33:10 -- common/autotest_common.sh@1325 -- # local sanitizers 00:13:37.186 00:33:10 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:37.186 00:33:10 -- common/autotest_common.sh@1327 -- # shift 00:13:37.186 00:33:10 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:13:37.186 00:33:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.186 00:33:10 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:37.186 00:33:10 -- common/autotest_common.sh@1331 -- # grep libasan 00:13:37.186 00:33:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:13:37.186 00:33:10 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:37.186 00:33:10 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:37.186 00:33:10 -- common/autotest_common.sh@1333 -- # break 00:13:37.186 00:33:10 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:37.186 00:33:10 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:37.186 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:37.186 fio-3.35 00:13:37.186 Starting 14 threads 00:13:49.411 00:13:49.411 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=117511: Sat Apr 27 00:33:22 2024 00:13:49.411 write: IOPS=116k, BW=454MiB/s (476MB/s)(4548MiB/10014msec); 0 zone resets 00:13:49.411 slat (usec): min=2, max=36043, avg=45.41, stdev=449.85 00:13:49.411 clat (usec): min=23, max=36336, avg=290.35, stdev=1154.12 00:13:49.411 lat (usec): min=33, max=36366, avg=335.75, stdev=1238.46 00:13:49.411 clat percentiles (usec): 00:13:49.411 | 50.000th=[ 198], 99.000th=[ 429], 99.900th=[16319], 99.990th=[20317], 00:13:49.411 | 99.999th=[28181] 00:13:49.411 bw ( KiB/s): min=330022, max=681178, per=99.82%, avg=464192.37, stdev=8815.02, samples=267 00:13:49.411 iops : min=82505, max=170293, avg=116047.99, stdev=2203.75, samples=267 00:13:49.411 trim: IOPS=116k, BW=454MiB/s (476MB/s)(4548MiB/10014msec); 0 zone resets 00:13:49.411 slat (usec): min=4, max=25119, avg=29.63, stdev=353.65 00:13:49.411 clat (usec): min=4, max=36367, avg=331.94, stdev=1228.74 00:13:49.411 lat (usec): min=14, max=36394, avg=361.56, stdev=1278.42 00:13:49.411 clat percentiles (usec): 00:13:49.411 | 50.000th=[ 229], 99.000th=[ 482], 99.900th=[16319], 99.990th=[21365], 00:13:49.411 | 99.999th=[28181] 00:13:49.411 bw ( KiB/s): min=330022, max=681218, per=99.82%, avg=464193.22, stdev=8815.72, samples=267 00:13:49.411 iops : min=82505, max=170303, avg=116047.99, stdev=2203.93, samples=267 00:13:49.411 lat (usec) : 10=0.01%, 20=0.01%, 50=0.38%, 100=4.81%, 250=59.85% 00:13:49.411 lat (usec) : 500=34.14%, 750=0.15%, 1000=0.01% 00:13:49.411 lat (msec) : 2=0.01%, 4=0.01%, 10=0.05%, 20=0.55%, 50=0.02% 00:13:49.411 cpu : usr=68.81%, sys=0.64%, ctx=168241, majf=0, minf=770 00:13:49.411 IO depths : 1=12.4%, 2=24.9%, 4=50.1%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:49.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.411 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.411 issued rwts: total=0,1164243,1164245,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.411 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:49.411 00:13:49.411 Run status group 0 (all 
jobs): 00:13:49.411 WRITE: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=4548MiB (4769MB), run=10014-10014msec 00:13:49.411 TRIM: bw=454MiB/s (476MB/s), 454MiB/s-454MiB/s (476MB/s-476MB/s), io=4548MiB (4769MB), run=10014-10014msec 00:13:50.787 ----------------------------------------------------- 00:13:50.787 Suppressions used: 00:13:50.787 count bytes template 00:13:50.787 14 129 /usr/src/fio/parse.c 00:13:50.787 1 904 libcrypto.so 00:13:50.787 ----------------------------------------------------- 00:13:50.787 00:13:50.787 00:13:50.787 real 0m13.688s 00:13:50.787 user 1m41.306s 00:13:50.787 sys 0m1.902s 00:13:50.787 00:33:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:50.787 00:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:50.787 ************************************ 00:13:50.787 END TEST bdev_fio_trim 00:13:50.787 ************************************ 00:13:50.787 00:33:24 -- bdev/blockdev.sh@368 -- # rm -f 00:13:50.788 00:33:24 -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:50.788 /home/vagrant/spdk_repo/spdk 00:13:50.788 00:33:24 -- bdev/blockdev.sh@370 -- # popd 00:13:50.788 00:33:24 -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:13:50.788 00:13:50.788 real 0m28.103s 00:13:50.788 user 3m17.467s 00:13:50.788 sys 0m6.045s 00:13:50.788 00:33:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:50.788 ************************************ 00:13:50.788 END TEST bdev_fio 00:13:50.788 ************************************ 00:13:50.788 00:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:50.788 00:33:24 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:50.788 00:33:24 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:50.788 00:33:24 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:50.788 00:33:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.788 00:33:24 -- common/autotest_common.sh@10 -- # set +x 00:13:51.047 ************************************ 00:13:51.047 START TEST bdev_verify 00:13:51.047 ************************************ 00:13:51.047 00:33:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:51.047 [2024-04-27 00:33:24.440268] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:13:51.047 [2024-04-27 00:33:24.440455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117702 ] 00:13:51.047 [2024-04-27 00:33:24.600427] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.306 [2024-04-27 00:33:24.789427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.306 [2024-04-27 00:33:24.789436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.564 [2024-04-27 00:33:25.149652] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:51.564 [2024-04-27 00:33:25.149836] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:51.824 [2024-04-27 00:33:25.157612] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:51.824 [2024-04-27 00:33:25.157699] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:51.824 [2024-04-27 00:33:25.165627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:51.824 [2024-04-27 00:33:25.165691] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:51.824 [2024-04-27 00:33:25.165741] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:51.824 [2024-04-27 00:33:25.350156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:51.824 [2024-04-27 00:33:25.350297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.824 [2024-04-27 00:33:25.350377] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:51.824 [2024-04-27 00:33:25.350407] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.824 [2024-04-27 00:33:25.353198] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.824 [2024-04-27 00:33:25.353270] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:52.392 Running I/O for 5 seconds... 
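The verify stage above reduces to a single bdevperf invocation against the JSON config generated earlier in this run. A minimal sketch of an equivalent manual command, assuming the in-tree paths used throughout this log (the flag comments paraphrase bdevperf's usage text and are illustrative):

    #!/usr/bin/env bash
    # Sketch: replay the bdev_verify workload by hand.
    # -q 128    queue depth per job
    # -o 4096   I/O size in bytes (4 KiB)
    # -w verify write a pattern, read it back, compare
    # -t 5      run time in seconds
    # -C        let every core submit I/O to every bdev
    # -m 0x3    core mask: cores 0 and 1, matching the two reactors above
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3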
00:13:57.666 00:13:57.666 Latency(us) 00:13:57.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.666 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x1000 00:13:57.666 Malloc0 : 5.13 1298.49 5.07 0.00 0.00 98447.78 621.85 285975.27 00:13:57.666 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x1000 length 0x1000 00:13:57.666 Malloc0 : 5.18 1235.44 4.83 0.00 0.00 103472.37 573.44 329824.81 00:13:57.666 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x800 00:13:57.666 Malloc1p0 : 5.13 673.92 2.63 0.00 0.00 189330.69 2829.96 155379.90 00:13:57.666 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x800 length 0x800 00:13:57.666 Malloc1p0 : 5.23 660.92 2.58 0.00 0.00 193057.30 2919.33 181117.67 00:13:57.666 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x800 00:13:57.666 Malloc1p1 : 5.13 673.63 2.63 0.00 0.00 189057.72 2681.02 153473.40 00:13:57.666 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x800 length 0x800 00:13:57.666 Malloc1p1 : 5.23 660.71 2.58 0.00 0.00 192750.82 2829.96 171585.16 00:13:57.666 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x200 00:13:57.666 Malloc2p0 : 5.13 673.35 2.63 0.00 0.00 188805.84 2740.60 151566.89 00:13:57.666 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x200 length 0x200 00:13:57.666 Malloc2p0 : 5.23 660.52 2.58 0.00 0.00 192478.91 2904.44 163005.91 00:13:57.666 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x200 00:13:57.666 Malloc2p1 : 5.13 673.08 2.63 0.00 0.00 188543.66 2591.65 149660.39 00:13:57.666 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x200 length 0x200 00:13:57.666 Malloc2p1 : 5.23 660.32 2.58 0.00 0.00 192179.05 2591.65 160146.15 00:13:57.666 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x200 00:13:57.666 Malloc2p2 : 5.14 672.80 2.63 0.00 0.00 188301.28 2606.55 147753.89 00:13:57.666 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x200 length 0x200 00:13:57.666 Malloc2p2 : 5.24 660.12 2.58 0.00 0.00 191901.82 2532.07 155379.90 00:13:57.666 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x200 00:13:57.666 Malloc2p3 : 5.14 672.54 2.63 0.00 0.00 188050.32 2427.81 145847.39 00:13:57.666 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x200 length 0x200 00:13:57.666 Malloc2p3 : 5.24 659.92 2.58 0.00 0.00 191637.54 2427.81 148707.14 00:13:57.666 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x200 00:13:57.666 Malloc2p4 : 5.14 672.26 2.63 0.00 0.00 187818.93 2308.65 
143940.89 00:13:57.666 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x200 length 0x200 00:13:57.666 Malloc2p4 : 5.24 659.72 2.58 0.00 0.00 191370.20 2412.92 143940.89 00:13:57.666 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x200 00:13:57.666 Malloc2p5 : 5.14 671.96 2.62 0.00 0.00 187600.09 2249.08 142987.64 00:13:57.666 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x200 length 0x200 00:13:57.666 Malloc2p5 : 5.24 659.51 2.58 0.00 0.00 191118.60 2398.02 142034.39 00:13:57.666 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x200 00:13:57.666 Malloc2p6 : 5.15 671.71 2.62 0.00 0.00 187372.27 2234.18 141081.13 00:13:57.666 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x200 length 0x200 00:13:57.666 Malloc2p6 : 5.24 659.31 2.58 0.00 0.00 190873.46 2323.55 140127.88 00:13:57.666 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x200 00:13:57.666 Malloc2p7 : 5.15 671.44 2.62 0.00 0.00 187143.83 2100.13 139174.63 00:13:57.666 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x200 length 0x200 00:13:57.666 Malloc2p7 : 5.24 659.10 2.57 0.00 0.00 190625.04 2204.39 142987.64 00:13:57.666 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x1000 00:13:57.666 TestPT : 5.19 665.36 2.60 0.00 0.00 187927.32 8400.52 138221.38 00:13:57.666 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x1000 length 0x1000 00:13:57.666 TestPT : 5.25 634.51 2.48 0.00 0.00 197598.92 26571.87 213528.20 00:13:57.666 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x2000 00:13:57.666 raid0 : 5.22 686.99 2.68 0.00 0.00 182124.53 3187.43 126782.37 00:13:57.666 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x2000 length 0x2000 00:13:57.666 raid0 : 5.25 658.72 2.57 0.00 0.00 189877.02 3187.43 152520.15 00:13:57.666 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x2000 00:13:57.666 concat0 : 5.22 686.69 2.68 0.00 0.00 181831.20 2740.60 130595.37 00:13:57.666 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x2000 length 0x2000 00:13:57.666 concat0 : 5.25 658.52 2.57 0.00 0.00 189550.56 2666.12 159192.90 00:13:57.666 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x0 length 0x1000 00:13:57.666 raid1 : 5.22 686.40 2.68 0.00 0.00 181541.38 3217.22 135361.63 00:13:57.666 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.666 Verification LBA range: start 0x1000 length 0x1000 00:13:57.666 raid1 : 5.25 658.31 2.57 0.00 0.00 189230.41 3187.43 166818.91 00:13:57.666 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:57.667 Verification LBA range: start 0x0 
length 0x4e2 00:13:57.667 AIO0 : 5.22 686.01 2.68 0.00 0.00 180934.49 2904.44 149660.39 00:13:57.667 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:57.667 Verification LBA range: start 0x4e2 length 0x4e2 00:13:57.667 AIO0 : 5.25 657.96 2.57 0.00 0.00 188626.44 2502.28 183024.17 00:13:57.667 =================================================================================================================== 00:13:57.667 Total : 22540.24 88.05 0.00 0.00 179116.57 573.44 329824.81 00:13:59.569 00:13:59.569 real 0m8.562s 00:13:59.569 user 0m14.999s 00:13:59.569 sys 0m0.572s 00:13:59.569 00:33:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:59.569 00:33:32 -- common/autotest_common.sh@10 -- # set +x 00:13:59.569 ************************************ 00:13:59.569 END TEST bdev_verify 00:13:59.569 ************************************ 00:13:59.569 00:33:32 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:59.569 00:33:32 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:59.569 00:33:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.569 00:33:32 -- common/autotest_common.sh@10 -- # set +x 00:13:59.569 ************************************ 00:13:59.569 START TEST bdev_verify_big_io 00:13:59.569 ************************************ 00:13:59.569 00:33:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:59.569 [2024-04-27 00:33:33.092507] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:13:59.569 [2024-04-27 00:33:33.092734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117831 ] 00:13:59.827 [2024-04-27 00:33:33.264439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:00.085 [2024-04-27 00:33:33.451486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.085 [2024-04-27 00:33:33.451500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.343 [2024-04-27 00:33:33.812525] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:00.343 [2024-04-27 00:33:33.812702] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:00.343 [2024-04-27 00:33:33.820512] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:00.343 [2024-04-27 00:33:33.820601] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:00.343 [2024-04-27 00:33:33.828537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:00.343 [2024-04-27 00:33:33.828619] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:00.343 [2024-04-27 00:33:33.828708] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:00.602 [2024-04-27 00:33:34.026152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:00.602 [2024-04-27 00:33:34.026312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.602 [2024-04-27 00:33:34.026397] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:00.602 [2024-04-27 00:33:34.026428] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.602 [2024-04-27 00:33:34.029194] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.602 [2024-04-27 00:33:34.029260] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:00.862 [2024-04-27 00:33:34.372031] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.375423] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.379190] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.383102] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.386308] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.389969] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.393118] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.396921] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.400071] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.403776] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.407125] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.410756] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.413817] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.417392] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.421114] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:14:00.862 [2024-04-27 00:33:34.424316] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:14:01.120 [2024-04-27 00:33:34.501941] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:01.120 [2024-04-27 00:33:34.508296] bdevperf.c:1817:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:14:01.120 Running I/O for 5 seconds... 00:14:07.687 00:14:07.687 Latency(us) 00:14:07.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.687 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x0 length 0x100 00:14:07.687 Malloc0 : 5.58 298.01 18.63 0.00 0.00 423294.73 722.39 1243039.19 00:14:07.687 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x100 length 0x100 00:14:07.687 Malloc0 : 5.40 331.93 20.75 0.00 0.00 381165.74 673.98 1342177.28 00:14:07.687 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x0 length 0x80 00:14:07.687 Malloc1p0 : 6.15 52.07 3.25 0.00 0.00 2280345.10 1400.09 3599475.43 00:14:07.687 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x80 length 0x80 00:14:07.687 Malloc1p0 : 5.66 181.49 11.34 0.00 0.00 666313.63 3068.28 1243039.19 00:14:07.687 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x0 length 0x80 00:14:07.687 Malloc1p1 : 6.15 52.06 3.25 0.00 0.00 2226771.14 1474.56 3492711.33 00:14:07.687 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x80 length 0x80 00:14:07.687 Malloc1p1 : 5.80 66.22 4.14 0.00 0.00 1819949.43 1563.93 2684354.56 00:14:07.687 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x0 length 0x20 00:14:07.687 Malloc2p0 : 5.81 41.29 2.58 0.00 0.00 711448.56 1169.22 1357429.29 00:14:07.687 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x20 length 0x20 00:14:07.687 Malloc2p0 : 5.67 50.81 3.18 0.00 0.00 590631.89 819.20 892242.85 00:14:07.687 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x0 length 0x20 00:14:07.687 Malloc2p1 : 5.81 41.28 2.58 0.00 0.00 706678.09 595.78 1334551.27 00:14:07.687 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x20 length 0x20 00:14:07.687 Malloc2p1 : 5.67 50.80 3.17 0.00 0.00 587973.94 647.91 873177.83 00:14:07.687 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x0 length 0x20 00:14:07.687 Malloc2p2 : 5.82 41.27 2.58 0.00 0.00 702232.96 673.98 1319299.26 00:14:07.687 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:07.687 Verification LBA range: start 0x20 length 0x20 00:14:07.687 Malloc2p2 : 5.67 50.78 3.17 0.00 0.00 584916.43 681.43 861738.82 00:14:07.687 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x20 00:14:07.688 Malloc2p3 : 5.82 41.26 2.58 0.00 0.00 697378.31 595.78 1296421.24 00:14:07.688 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x20 length 0x20 00:14:07.688 Malloc2p3 : 5.67 50.77 3.17 0.00 0.00 582349.76 644.19 846486.81 00:14:07.688 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x20 00:14:07.688 Malloc2p4 : 5.82 41.24 2.58 0.00 0.00 693504.66 577.16 1281169.22 00:14:07.688 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x20 length 0x20 00:14:07.688 Malloc2p4 : 5.67 50.76 3.17 0.00 0.00 579530.44 677.70 835047.80 00:14:07.688 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x20 00:14:07.688 Malloc2p5 : 5.82 41.23 2.58 0.00 0.00 689325.30 655.36 1265917.21 00:14:07.688 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x20 length 0x20 00:14:07.688 Malloc2p5 : 5.68 50.74 3.17 0.00 0.00 577191.34 1266.04 819795.78 00:14:07.688 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x20 00:14:07.688 Malloc2p6 : 5.82 41.22 2.58 0.00 0.00 685386.43 700.04 1243039.19 00:14:07.688 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x20 length 0x20 00:14:07.688 Malloc2p6 : 5.68 50.73 3.17 0.00 0.00 574820.12 655.36 808356.77 00:14:07.688 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x20 00:14:07.688 Malloc2p7 : 5.82 41.21 2.58 0.00 0.00 680870.91 1258.59 1227787.17 00:14:07.688 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x20 length 0x20 00:14:07.688 Malloc2p7 : 5.68 50.71 3.17 0.00 0.00 572384.69 659.08 796917.76 00:14:07.688 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x100 00:14:07.688 TestPT : 6.24 56.37 3.52 0.00 0.00 1915137.98 1377.75 3218175.07 00:14:07.688 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x100 length 0x100 00:14:07.688 TestPT : 5.91 62.94 3.93 0.00 0.00 1798533.34 65297.69 2272550.17 00:14:07.688 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x200 00:14:07.688 raid0 : 6.05 61.53 3.85 0.00 0.00 1755412.82 1563.93 3111410.97 00:14:07.688 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x200 length 0x200 00:14:07.688 raid0 : 5.95 67.26 4.20 0.00 0.00 1654410.24 2159.71 2394566.28 00:14:07.688 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x200 00:14:07.688 concat0 : 6.11 73.38 4.59 0.00 0.00 1449826.14 1526.69 3004646.87 00:14:07.688 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x200 length 0x200 00:14:07.688 concat0 : 5.96 72.51 4.53 0.00 0.00 1524919.07 
1534.14 2318306.21 00:14:07.688 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x100 00:14:07.688 raid1 : 6.18 93.27 5.83 0.00 0.00 1121946.11 1839.48 2928386.79 00:14:07.688 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x100 length 0x100 00:14:07.688 raid1 : 5.95 77.98 4.87 0.00 0.00 1396875.29 1861.82 2287802.18 00:14:07.688 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x0 length 0x4e 00:14:07.688 AIO0 : 6.20 93.56 5.85 0.00 0.00 670513.23 1511.80 1792111.71 00:14:07.688 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:14:07.688 Verification LBA range: start 0x4e length 0x4e 00:14:07.688 AIO0 : 5.96 87.98 5.50 0.00 0.00 750615.35 852.71 1380307.32 00:14:07.688 =================================================================================================================== 00:14:07.688 Total : 2464.64 154.04 0.00 0.00 907350.46 577.16 3599475.43 00:14:09.591 00:14:09.591 real 0m9.981s 00:14:09.591 user 0m18.242s 00:14:09.591 sys 0m0.537s 00:14:09.591 00:33:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:09.591 00:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.591 ************************************ 00:14:09.591 END TEST bdev_verify_big_io 00:14:09.591 ************************************ 00:14:09.591 00:33:43 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.591 00:33:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:09.591 00:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.591 00:33:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.591 ************************************ 00:14:09.591 START TEST bdev_write_zeroes 00:14:09.591 ************************************ 00:14:09.591 00:33:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.591 [2024-04-27 00:33:43.170495] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:14:09.591 [2024-04-27 00:33:43.171187] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117979 ] 00:14:09.849 [2024-04-27 00:33:43.340100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.107 [2024-04-27 00:33:43.529305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.366 [2024-04-27 00:33:43.886196] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:10.366 [2024-04-27 00:33:43.886318] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:14:10.366 [2024-04-27 00:33:43.894167] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:10.366 [2024-04-27 00:33:43.894229] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:14:10.366 [2024-04-27 00:33:43.902193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:10.366 [2024-04-27 00:33:43.902247] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:14:10.366 [2024-04-27 00:33:43.902284] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:14:10.624 [2024-04-27 00:33:44.087156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:10.624 [2024-04-27 00:33:44.087311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.624 [2024-04-27 00:33:44.087344] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:14:10.624 [2024-04-27 00:33:44.087372] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.624 [2024-04-27 00:33:44.089793] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.624 [2024-04-27 00:33:44.089865] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:14:10.883 Running I/O for 1 seconds... 
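The write_zeroes stage here drives every bdev whose supported_io_types block (dumped earlier) reports "write_zeroes": true. Against a live target, the same set can be listed with the in-tree rpc.py; a sketch, assuming the default /var/tmp/spdk.sock socket, in the same jq style the trim loop used above:

    # List bdevs advertising write_zeroes, mirroring the unmap selection above.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/scripts/rpc.py" bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'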
00:14:12.260 00:14:12.260 Latency(us) 00:14:12.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.260 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.260 Malloc0 : 1.04 5638.86 22.03 0.00 0.00 22684.31 785.69 40274.85 00:14:12.260 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.260 Malloc1p0 : 1.05 5632.53 22.00 0.00 0.00 22673.31 878.78 39321.60 00:14:12.260 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.260 Malloc1p1 : 1.05 5626.88 21.98 0.00 0.00 22647.92 830.37 38606.66 00:14:12.260 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.260 Malloc2p0 : 1.05 5621.28 21.96 0.00 0.00 22625.83 889.95 37653.41 00:14:12.260 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.260 Malloc2p1 : 1.05 5615.36 21.94 0.00 0.00 22611.16 897.40 36700.16 00:14:12.260 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.260 Malloc2p2 : 1.05 5608.84 21.91 0.00 0.00 22585.84 882.50 35746.91 00:14:12.260 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 Malloc2p3 : 1.05 5602.53 21.88 0.00 0.00 22569.88 912.29 34793.66 00:14:12.261 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 Malloc2p4 : 1.05 5596.66 21.86 0.00 0.00 22549.71 837.82 34078.72 00:14:12.261 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 Malloc2p5 : 1.05 5590.49 21.84 0.00 0.00 22532.07 875.05 33363.78 00:14:12.261 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 Malloc2p6 : 1.05 5584.65 21.82 0.00 0.00 22513.74 848.99 32410.53 00:14:12.261 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 Malloc2p7 : 1.06 5578.39 21.79 0.00 0.00 22494.97 1072.41 31457.28 00:14:12.261 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 TestPT : 1.06 5572.58 21.77 0.00 0.00 22468.36 968.15 30265.72 00:14:12.261 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 raid0 : 1.06 5565.46 21.74 0.00 0.00 22433.17 1839.48 28240.06 00:14:12.261 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 concat0 : 1.06 5558.89 21.71 0.00 0.00 22378.77 1563.93 26691.03 00:14:12.261 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 raid1 : 1.06 5550.36 21.68 0.00 0.00 22326.31 2368.23 24903.68 00:14:12.261 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:12.261 AIO0 : 1.06 5537.34 21.63 0.00 0.00 22274.29 1608.61 25022.84 00:14:12.261 =================================================================================================================== 00:14:12.261 Total : 89481.11 349.54 0.00 0.00 22523.12 785.69 40274.85 00:14:14.163 00:14:14.163 real 0m4.295s 00:14:14.163 user 0m3.690s 00:14:14.163 sys 0m0.413s 00:14:14.163 00:33:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:14.163 00:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:14.163 ************************************ 00:14:14.163 END TEST bdev_write_zeroes 00:14:14.163 ************************************ 00:14:14.163 00:33:47 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:14.163 00:33:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:14.163 00:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.163 00:33:47 -- common/autotest_common.sh@10 -- # set +x 00:14:14.163 ************************************ 00:14:14.163 START TEST bdev_json_nonenclosed 00:14:14.163 ************************************ 00:14:14.163 00:33:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:14.163 [2024-04-27 00:33:47.551616] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:14:14.163 [2024-04-27 00:33:47.552075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118057 ] 00:14:14.163 [2024-04-27 00:33:47.717271] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.422 [2024-04-27 00:33:47.904419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.422 [2024-04-27 00:33:47.904571] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:14.422 [2024-04-27 00:33:47.904609] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:14.422 [2024-04-27 00:33:47.904642] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:14.681 00:14:14.681 real 0m0.777s 00:14:14.681 user 0m0.545s 00:14:14.681 sys 0m0.132s 00:14:14.681 00:33:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:14.681 ************************************ 00:14:14.681 END TEST bdev_json_nonenclosed 00:14:14.681 ************************************ 00:14:14.681 00:33:48 -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 00:33:48 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:14.939 00:33:48 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:14.939 00:33:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.939 00:33:48 -- common/autotest_common.sh@10 -- # set +x 00:14:14.939 ************************************ 00:14:14.939 START TEST bdev_json_nonarray 00:14:14.939 ************************************ 00:14:14.939 00:33:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:14.939 [2024-04-27 00:33:48.413142] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:14:14.939 [2024-04-27 00:33:48.413341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118101 ] 00:14:15.199 [2024-04-27 00:33:48.581624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.199 [2024-04-27 00:33:48.774246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.199 [2024-04-27 00:33:48.774424] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:15.199 [2024-04-27 00:33:48.774481] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:15.199 [2024-04-27 00:33:48.774530] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:15.768 00:14:15.768 real 0m0.786s 00:14:15.768 user 0m0.546s 00:14:15.768 sys 0m0.140s 00:14:15.768 00:33:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:15.768 ************************************ 00:14:15.768 END TEST bdev_json_nonarray 00:14:15.768 ************************************ 00:14:15.768 00:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:15.768 00:33:49 -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:14:15.768 00:33:49 -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:14:15.768 00:33:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:15.768 00:33:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.768 00:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:15.768 ************************************ 00:14:15.768 START TEST bdev_qos 00:14:15.768 ************************************ 00:14:15.768 00:33:49 -- common/autotest_common.sh@1111 -- # qos_test_suite '' 00:14:15.768 00:33:49 -- bdev/blockdev.sh@446 -- # QOS_PID=118138 00:14:15.768 Process qos testing pid: 118138 00:14:15.768 00:33:49 -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 118138' 00:14:15.768 00:33:49 -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:14:15.768 00:33:49 -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:14:15.768 00:33:49 -- bdev/blockdev.sh@449 -- # waitforlisten 118138 00:14:15.768 00:33:49 -- common/autotest_common.sh@817 -- # '[' -z 118138 ']' 00:14:15.768 00:33:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.768 00:33:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:15.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.768 00:33:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.768 00:33:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:15.768 00:33:49 -- common/autotest_common.sh@10 -- # set +x 00:14:15.768 [2024-04-27 00:33:49.298940] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:14:15.768 [2024-04-27 00:33:49.299149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118138 ] 00:14:16.026 [2024-04-27 00:33:49.469490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.284 [2024-04-27 00:33:49.699871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.851 00:33:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:16.851 00:33:50 -- common/autotest_common.sh@850 -- # return 0 00:14:16.851 00:33:50 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:14:16.851 00:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.851 00:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.851 Malloc_0 00:14:16.851 00:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.851 00:33:50 -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:14:16.851 00:33:50 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_0 00:14:16.851 00:33:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:16.851 00:33:50 -- common/autotest_common.sh@887 -- # local i 00:14:16.851 00:33:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:16.851 00:33:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:16.851 00:33:50 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:14:16.851 00:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.851 00:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.851 00:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.851 00:33:50 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:14:16.851 00:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.851 00:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:16.851 [ 00:14:16.851 { 00:14:16.851 "name": "Malloc_0", 00:14:16.852 "aliases": [ 00:14:16.852 "2f5feeca-ccff-4b01-8577-70f0d8b722b3" 00:14:16.852 ], 00:14:16.852 "product_name": "Malloc disk", 00:14:16.852 "block_size": 512, 00:14:16.852 "num_blocks": 262144, 00:14:16.852 "uuid": "2f5feeca-ccff-4b01-8577-70f0d8b722b3", 00:14:16.852 "assigned_rate_limits": { 00:14:16.852 "rw_ios_per_sec": 0, 00:14:16.852 "rw_mbytes_per_sec": 0, 00:14:16.852 "r_mbytes_per_sec": 0, 00:14:16.852 "w_mbytes_per_sec": 0 00:14:16.852 }, 00:14:16.852 "claimed": false, 00:14:16.852 "zoned": false, 00:14:16.852 "supported_io_types": { 00:14:16.852 "read": true, 00:14:16.852 "write": true, 00:14:16.852 "unmap": true, 00:14:16.852 "write_zeroes": true, 00:14:16.852 "flush": true, 00:14:16.852 "reset": true, 00:14:16.852 "compare": false, 00:14:16.852 "compare_and_write": false, 00:14:16.852 "abort": true, 00:14:16.852 "nvme_admin": false, 00:14:16.852 "nvme_io": false 00:14:16.852 }, 00:14:16.852 "memory_domains": [ 00:14:16.852 { 00:14:16.852 "dma_device_id": "system", 00:14:16.852 "dma_device_type": 1 00:14:16.852 }, 00:14:16.852 { 00:14:16.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.852 "dma_device_type": 2 00:14:16.852 } 00:14:16.852 ], 00:14:16.852 "driver_specific": {} 00:14:16.852 } 00:14:16.852 ] 00:14:16.852 00:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.852 00:33:50 -- common/autotest_common.sh@893 -- # return 0 00:14:16.852 00:33:50 -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:14:16.852 00:33:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.852 00:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:17.110 Null_1 00:14:17.110 00:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.110 00:33:50 -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:14:17.110 00:33:50 -- common/autotest_common.sh@885 -- # local bdev_name=Null_1 00:14:17.110 00:33:50 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:17.110 00:33:50 -- common/autotest_common.sh@887 -- # local i 00:14:17.110 00:33:50 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:17.110 00:33:50 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:17.110 00:33:50 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:14:17.110 00:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.110 00:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:17.110 00:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.110 00:33:50 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:14:17.110 00:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:17.110 00:33:50 -- common/autotest_common.sh@10 -- # set +x 00:14:17.110 [ 00:14:17.110 { 00:14:17.110 "name": "Null_1", 00:14:17.110 "aliases": [ 00:14:17.110 "e08d71aa-a88c-4b33-bab5-e5a375273e2e" 00:14:17.110 ], 00:14:17.110 "product_name": "Null disk", 00:14:17.110 "block_size": 512, 00:14:17.110 "num_blocks": 262144, 00:14:17.110 "uuid": "e08d71aa-a88c-4b33-bab5-e5a375273e2e", 00:14:17.110 "assigned_rate_limits": { 00:14:17.110 "rw_ios_per_sec": 0, 00:14:17.110 "rw_mbytes_per_sec": 0, 00:14:17.110 "r_mbytes_per_sec": 0, 00:14:17.110 "w_mbytes_per_sec": 0 00:14:17.110 }, 00:14:17.110 "claimed": false, 00:14:17.110 "zoned": false, 00:14:17.110 "supported_io_types": { 00:14:17.110 "read": true, 00:14:17.110 "write": true, 00:14:17.110 "unmap": false, 00:14:17.110 "write_zeroes": true, 00:14:17.110 "flush": false, 00:14:17.110 "reset": true, 00:14:17.110 "compare": false, 00:14:17.110 "compare_and_write": false, 00:14:17.110 "abort": true, 00:14:17.110 "nvme_admin": false, 00:14:17.110 "nvme_io": false 00:14:17.110 }, 00:14:17.110 "driver_specific": {} 00:14:17.110 } 00:14:17.110 ] 00:14:17.110 00:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:17.110 00:33:50 -- common/autotest_common.sh@893 -- # return 0 00:14:17.110 00:33:50 -- bdev/blockdev.sh@457 -- # qos_function_test 00:14:17.110 00:33:50 -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:14:17.110 00:33:50 -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:17.110 00:33:50 -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:14:17.110 00:33:50 -- bdev/blockdev.sh@412 -- # local io_result=0 00:14:17.110 00:33:50 -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:14:17.110 00:33:50 -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:14:17.110 00:33:50 -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:14:17.110 00:33:50 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:14:17.110 00:33:50 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:17.110 00:33:50 -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:17.110 00:33:50 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:17.110 00:33:50 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:17.110 00:33:50 -- bdev/blockdev.sh@378 -- # tail -1 00:14:17.110 Running I/O for 60 seconds... 
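The 60-second window that follows measures Malloc_0's unthrottled IOPS through iostat.py, derives a cap from it (18000 IOPS in this run), and applies the cap before re-measuring. The same throttle can be set or cleared by hand; a sketch using the RPC shown in the trace, assuming a running target and the in-tree scripts:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Cap Malloc_0 at the read/write IOPS limit this run derives below.
    "$SPDK/scripts/rpc.py" bdev_set_qos_limit --rw_ios_per_sec 18000 Malloc_0
    # A value of 0 removes the limit again.
    "$SPDK/scripts/rpc.py" bdev_set_qos_limit --rw_ios_per_sec 0 Malloc_0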
00:14:22.384 00:33:55 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 75307.65 301230.61 0.00 0.00 304128.00 0.00 0.00 ' 00:14:22.384 00:33:55 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:14:22.384 00:33:55 -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:14:22.384 00:33:55 -- bdev/blockdev.sh@380 -- # iostat_result=75307.65 00:14:22.384 00:33:55 -- bdev/blockdev.sh@385 -- # echo 75307 00:14:22.384 00:33:55 -- bdev/blockdev.sh@416 -- # io_result=75307 00:14:22.384 00:33:55 -- bdev/blockdev.sh@418 -- # iops_limit=18000 00:14:22.384 00:33:55 -- bdev/blockdev.sh@419 -- # '[' 18000 -gt 1000 ']' 00:14:22.384 00:33:55 -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 18000 Malloc_0 00:14:22.384 00:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:22.384 00:33:55 -- common/autotest_common.sh@10 -- # set +x 00:14:22.384 00:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:22.384 00:33:55 -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 18000 IOPS Malloc_0 00:14:22.384 00:33:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:22.384 00:33:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.384 00:33:55 -- common/autotest_common.sh@10 -- # set +x 00:14:22.384 ************************************ 00:14:22.384 START TEST bdev_qos_iops 00:14:22.384 ************************************ 00:14:22.384 00:33:55 -- common/autotest_common.sh@1111 -- # run_qos_test 18000 IOPS Malloc_0 00:14:22.384 00:33:55 -- bdev/blockdev.sh@389 -- # local qos_limit=18000 00:14:22.384 00:33:55 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:22.384 00:33:55 -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:14:22.384 00:33:55 -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:14:22.384 00:33:55 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:22.384 00:33:55 -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:22.384 00:33:55 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:22.384 00:33:55 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:22.384 00:33:55 -- bdev/blockdev.sh@378 -- # tail -1 00:14:27.658 00:34:00 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 18057.47 72229.87 0.00 0.00 73584.00 0.00 0.00 ' 00:14:27.658 00:34:00 -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:14:27.658 00:34:00 -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:14:27.658 00:34:00 -- bdev/blockdev.sh@380 -- # iostat_result=18057.47 00:14:27.658 00:34:00 -- bdev/blockdev.sh@385 -- # echo 18057 00:14:27.658 00:34:00 -- bdev/blockdev.sh@392 -- # qos_result=18057 00:14:27.658 00:34:00 -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:14:27.658 00:34:00 -- bdev/blockdev.sh@396 -- # lower_limit=16200 00:14:27.658 00:34:00 -- bdev/blockdev.sh@397 -- # upper_limit=19800 00:14:27.658 00:34:00 -- bdev/blockdev.sh@400 -- # '[' 18057 -lt 16200 ']' 00:14:27.658 00:34:00 -- bdev/blockdev.sh@400 -- # '[' 18057 -gt 19800 ']' 00:14:27.658 00:14:27.658 real 0m5.193s 00:14:27.658 user 0m0.089s 00:14:27.658 sys 0m0.036s 00:14:27.658 00:34:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:27.658 00:34:00 -- common/autotest_common.sh@10 -- # set +x 00:14:27.658 ************************************ 00:14:27.658 END TEST bdev_qos_iops 00:14:27.658 ************************************ 00:14:27.658 00:34:00 -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:14:27.658 00:34:00 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:27.658 00:34:00 -- 
bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:14:27.658 00:34:00 -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:27.658 00:34:00 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:27.658 00:34:00 -- bdev/blockdev.sh@378 -- # tail -1 00:14:27.658 00:34:00 -- bdev/blockdev.sh@378 -- # grep Null_1 00:14:32.920 00:34:06 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 30834.28 123337.13 0.00 0.00 124928.00 0.00 0.00 ' 00:14:32.920 00:34:06 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:32.920 00:34:06 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:32.920 00:34:06 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:32.920 00:34:06 -- bdev/blockdev.sh@382 -- # iostat_result=124928.00 00:14:32.920 00:34:06 -- bdev/blockdev.sh@385 -- # echo 124928 00:14:32.920 00:34:06 -- bdev/blockdev.sh@427 -- # bw_limit=124928 00:14:32.920 00:34:06 -- bdev/blockdev.sh@428 -- # bw_limit=12 00:14:32.920 00:34:06 -- bdev/blockdev.sh@429 -- # '[' 12 -lt 2 ']' 00:14:32.920 00:34:06 -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:14:32.920 00:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.920 00:34:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.920 00:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.920 00:34:06 -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:14:32.920 00:34:06 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:32.920 00:34:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.920 00:34:06 -- common/autotest_common.sh@10 -- # set +x 00:14:32.920 ************************************ 00:14:32.920 START TEST bdev_qos_bw 00:14:32.920 ************************************ 00:14:32.920 00:34:06 -- common/autotest_common.sh@1111 -- # run_qos_test 12 BANDWIDTH Null_1 00:14:32.920 00:34:06 -- bdev/blockdev.sh@389 -- # local qos_limit=12 00:14:32.920 00:34:06 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:32.920 00:34:06 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:14:32.920 00:34:06 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:32.920 00:34:06 -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:14:32.920 00:34:06 -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:32.920 00:34:06 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:32.920 00:34:06 -- bdev/blockdev.sh@378 -- # grep Null_1 00:14:32.920 00:34:06 -- bdev/blockdev.sh@378 -- # tail -1 00:14:38.183 00:34:11 -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3069.10 12276.41 0.00 0.00 12496.00 0.00 0.00 ' 00:14:38.183 00:34:11 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:38.183 00:34:11 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:38.183 00:34:11 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:38.183 00:34:11 -- bdev/blockdev.sh@382 -- # iostat_result=12496.00 00:14:38.183 00:34:11 -- bdev/blockdev.sh@385 -- # echo 12496 00:14:38.183 ************************************ 00:14:38.183 END TEST bdev_qos_bw 00:14:38.183 ************************************ 00:14:38.183 00:34:11 -- bdev/blockdev.sh@392 -- # qos_result=12496 00:14:38.183 00:34:11 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:38.183 00:34:11 -- bdev/blockdev.sh@394 -- # qos_limit=12288 00:14:38.183 00:34:11 -- bdev/blockdev.sh@396 -- # lower_limit=11059 00:14:38.183 00:34:11 -- bdev/blockdev.sh@397 -- # 
upper_limit=13516 00:14:38.183 00:34:11 -- bdev/blockdev.sh@400 -- # '[' 12496 -lt 11059 ']' 00:14:38.183 00:34:11 -- bdev/blockdev.sh@400 -- # '[' 12496 -gt 13516 ']' 00:14:38.183 00:14:38.183 real 0m5.246s 00:14:38.183 user 0m0.117s 00:14:38.183 sys 0m0.032s 00:14:38.183 00:34:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:38.183 00:34:11 -- common/autotest_common.sh@10 -- # set +x 00:14:38.183 00:34:11 -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:38.183 00:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.183 00:34:11 -- common/autotest_common.sh@10 -- # set +x 00:14:38.183 00:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.183 00:34:11 -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:38.183 00:34:11 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:38.183 00:34:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.183 00:34:11 -- common/autotest_common.sh@10 -- # set +x 00:14:38.183 ************************************ 00:14:38.183 START TEST bdev_qos_ro_bw 00:14:38.183 ************************************ 00:14:38.183 00:34:11 -- common/autotest_common.sh@1111 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:38.183 00:34:11 -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:14:38.183 00:34:11 -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:38.183 00:34:11 -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:14:38.183 00:34:11 -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:38.183 00:34:11 -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:38.183 00:34:11 -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:38.183 00:34:11 -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:38.183 00:34:11 -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:38.183 00:34:11 -- bdev/blockdev.sh@378 -- # tail -1 00:14:43.454 00:34:16 -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.19 2048.78 0.00 0.00 2064.00 0.00 0.00 ' 00:14:43.454 00:34:16 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:43.454 00:34:16 -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:43.454 00:34:16 -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:43.454 00:34:16 -- bdev/blockdev.sh@382 -- # iostat_result=2064.00 00:14:43.454 00:34:16 -- bdev/blockdev.sh@385 -- # echo 2064 00:14:43.454 ************************************ 00:14:43.454 END TEST bdev_qos_ro_bw 00:14:43.454 ************************************ 00:14:43.454 00:34:16 -- bdev/blockdev.sh@392 -- # qos_result=2064 00:14:43.454 00:34:16 -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:43.454 00:34:16 -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:14:43.454 00:34:16 -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:14:43.454 00:34:16 -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:14:43.454 00:34:16 -- bdev/blockdev.sh@400 -- # '[' 2064 -lt 1843 ']' 00:14:43.454 00:34:16 -- bdev/blockdev.sh@400 -- # '[' 2064 -gt 2252 ']' 00:14:43.454 00:14:43.454 real 0m5.170s 00:14:43.454 user 0m0.119s 00:14:43.454 sys 0m0.027s 00:14:43.454 00:34:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:43.454 00:34:16 -- common/autotest_common.sh@10 -- # set +x 00:14:43.454 00:34:16 -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:43.454 00:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.454 00:34:16 -- common/autotest_common.sh@10 -- # set +x 
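The three QoS checks above (bdev_qos_iops, bdev_qos_bw, bdev_qos_ro_bw) all follow the same pattern: apply a limit with bdev_set_qos_limit, sample the device with scripts/iostat.py, and assert the measured rate lands inside a +/-10% band around the limit. A minimal standalone sketch of that flow, assuming an SPDK target is already running and reusing the script paths from this job's environment (both assumptions, not part of the log):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Cap the malloc bdev at 18000 read/write IOPS, as the trace above does.
$RPC bdev_set_qos_limit --rw_ios_per_sec 18000 Malloc_0

# Sample for 5 one-second intervals and keep the device's last line;
# column 2 of iostat.py's -d output is the IOPS figure the test reads.
iops=$("$SPDK_DIR/scripts/iostat.py" -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')

# The harness accepts anything in 16200..19800 (limit +/- 10%).
echo "measured ${iops%.*} IOPS against an 18000 IOPS limit"

# Bandwidth limits use the same RPC with --rw_mbytes_per_sec (or
# --r_mbytes_per_sec for read-only), reading column 6 of the same
# iostat sample instead of column 2.

The band is why the checks pass above without exact matches: 18057 against a limit of 18000, and 12496 against 12288.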
00:14:44.021 00:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:44.021 00:34:17 -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:14:44.021 00:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:44.021 00:34:17 -- common/autotest_common.sh@10 -- # set +x 00:14:44.021 00:14:44.021 Latency(us) 00:14:44.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.022 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:44.022 Malloc_0 : 26.72 24964.71 97.52 0.00 0.00 10159.72 2278.87 503316.48 00:14:44.022 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:44.022 Null_1 : 26.91 27146.20 106.04 0.00 0.00 9411.20 692.60 187790.43 00:14:44.022 =================================================================================================================== 00:14:44.022 Total : 52110.91 203.56 0.00 0.00 9768.48 692.60 503316.48 00:14:44.022 0 00:14:44.022 00:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:44.022 00:34:17 -- bdev/blockdev.sh@461 -- # killprocess 118138 00:14:44.022 00:34:17 -- common/autotest_common.sh@936 -- # '[' -z 118138 ']' 00:14:44.022 00:34:17 -- common/autotest_common.sh@940 -- # kill -0 118138 00:14:44.022 00:34:17 -- common/autotest_common.sh@941 -- # uname 00:14:44.022 00:34:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:44.022 00:34:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118138 00:14:44.022 00:34:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:44.022 00:34:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:44.022 killing process with pid 118138 00:14:44.022 00:34:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118138' 00:14:44.022 Received shutdown signal, test time was about 26.939050 seconds 00:14:44.022 00:14:44.022 Latency(us) 00:14:44.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.022 =================================================================================================================== 00:14:44.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.022 00:34:17 -- common/autotest_common.sh@955 -- # kill 118138 00:14:44.022 00:34:17 -- common/autotest_common.sh@960 -- # wait 118138 00:14:45.423 00:34:18 -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:14:45.423 00:14:45.423 real 0m29.499s 00:14:45.423 user 0m30.343s 00:14:45.423 sys 0m0.676s 00:14:45.423 00:34:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:45.423 00:34:18 -- common/autotest_common.sh@10 -- # set +x 00:14:45.423 ************************************ 00:14:45.423 END TEST bdev_qos 00:14:45.423 ************************************ 00:14:45.423 00:34:18 -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:45.423 00:34:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.423 00:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.423 00:34:18 -- common/autotest_common.sh@10 -- # set +x 00:14:45.423 ************************************ 00:14:45.423 START TEST bdev_qd_sampling 00:14:45.423 ************************************ 00:14:45.423 00:34:18 -- common/autotest_common.sh@1111 -- # qd_sampling_test_suite '' 00:14:45.423 00:34:18 -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:14:45.423 00:34:18 -- bdev/blockdev.sh@541 -- # QD_PID=118635 00:14:45.423 00:34:18 -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD 
sampling period testing pid: 118635' 00:14:45.423 Process bdev QD sampling period testing pid: 118635 00:14:45.423 00:34:18 -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:45.423 00:34:18 -- bdev/blockdev.sh@544 -- # waitforlisten 118635 00:14:45.423 00:34:18 -- common/autotest_common.sh@817 -- # '[' -z 118635 ']' 00:14:45.423 00:34:18 -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:45.423 00:34:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.423 00:34:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:45.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.423 00:34:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.423 00:34:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:45.423 00:34:18 -- common/autotest_common.sh@10 -- # set +x 00:14:45.423 [2024-04-27 00:34:18.883680] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:14:45.423 [2024-04-27 00:34:18.884064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118635 ] 00:14:45.681 [2024-04-27 00:34:19.059385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.939 [2024-04-27 00:34:19.289156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.939 [2024-04-27 00:34:19.289161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.198 00:34:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:46.198 00:34:19 -- common/autotest_common.sh@850 -- # return 0 00:14:46.198 00:34:19 -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:46.198 00:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.198 00:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:46.456 Malloc_QD 00:14:46.456 00:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.456 00:34:19 -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:14:46.456 00:34:19 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_QD 00:14:46.456 00:34:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:46.456 00:34:19 -- common/autotest_common.sh@887 -- # local i 00:14:46.456 00:34:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:46.456 00:34:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:46.456 00:34:19 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:14:46.456 00:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.456 00:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:46.456 00:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.456 00:34:19 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:46.456 00:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.456 00:34:19 -- common/autotest_common.sh@10 -- # set +x 00:14:46.456 [ 00:14:46.456 { 00:14:46.456 "name": "Malloc_QD", 00:14:46.456 "aliases": [ 00:14:46.456 "aeefc538-ad79-445d-ba28-7fc279ebc039" 00:14:46.456 ], 00:14:46.456 "product_name": "Malloc disk", 00:14:46.456 "block_size": 512, 00:14:46.456 
"num_blocks": 262144, 00:14:46.456 "uuid": "aeefc538-ad79-445d-ba28-7fc279ebc039", 00:14:46.456 "assigned_rate_limits": { 00:14:46.456 "rw_ios_per_sec": 0, 00:14:46.456 "rw_mbytes_per_sec": 0, 00:14:46.456 "r_mbytes_per_sec": 0, 00:14:46.456 "w_mbytes_per_sec": 0 00:14:46.456 }, 00:14:46.456 "claimed": false, 00:14:46.456 "zoned": false, 00:14:46.456 "supported_io_types": { 00:14:46.456 "read": true, 00:14:46.456 "write": true, 00:14:46.456 "unmap": true, 00:14:46.456 "write_zeroes": true, 00:14:46.456 "flush": true, 00:14:46.456 "reset": true, 00:14:46.456 "compare": false, 00:14:46.456 "compare_and_write": false, 00:14:46.456 "abort": true, 00:14:46.456 "nvme_admin": false, 00:14:46.456 "nvme_io": false 00:14:46.456 }, 00:14:46.456 "memory_domains": [ 00:14:46.456 { 00:14:46.456 "dma_device_id": "system", 00:14:46.457 "dma_device_type": 1 00:14:46.457 }, 00:14:46.457 { 00:14:46.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.457 "dma_device_type": 2 00:14:46.457 } 00:14:46.457 ], 00:14:46.457 "driver_specific": {} 00:14:46.457 } 00:14:46.457 ] 00:14:46.457 00:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.457 00:34:19 -- common/autotest_common.sh@893 -- # return 0 00:14:46.457 00:34:19 -- bdev/blockdev.sh@550 -- # sleep 2 00:14:46.457 00:34:19 -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:46.715 Running I/O for 5 seconds... 00:14:48.613 00:34:21 -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:14:48.613 00:34:21 -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:14:48.613 00:34:21 -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:14:48.613 00:34:21 -- bdev/blockdev.sh@521 -- # local iostats 00:14:48.613 00:34:21 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:48.613 00:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.614 00:34:21 -- common/autotest_common.sh@10 -- # set +x 00:14:48.614 00:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.614 00:34:21 -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:48.614 00:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.614 00:34:21 -- common/autotest_common.sh@10 -- # set +x 00:14:48.614 00:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.614 00:34:21 -- bdev/blockdev.sh@525 -- # iostats='{ 00:14:48.614 "tick_rate": 2200000000, 00:14:48.614 "ticks": 1672490441294, 00:14:48.614 "bdevs": [ 00:14:48.614 { 00:14:48.614 "name": "Malloc_QD", 00:14:48.614 "bytes_read": 860918272, 00:14:48.614 "num_read_ops": 210179, 00:14:48.614 "bytes_written": 0, 00:14:48.614 "num_write_ops": 0, 00:14:48.614 "bytes_unmapped": 0, 00:14:48.614 "num_unmap_ops": 0, 00:14:48.614 "bytes_copied": 0, 00:14:48.614 "num_copy_ops": 0, 00:14:48.614 "read_latency_ticks": 2161978430119, 00:14:48.614 "max_read_latency_ticks": 30361826, 00:14:48.614 "min_read_latency_ticks": 322078, 00:14:48.614 "write_latency_ticks": 0, 00:14:48.614 "max_write_latency_ticks": 0, 00:14:48.614 "min_write_latency_ticks": 0, 00:14:48.614 "unmap_latency_ticks": 0, 00:14:48.614 "max_unmap_latency_ticks": 0, 00:14:48.614 "min_unmap_latency_ticks": 0, 00:14:48.614 "copy_latency_ticks": 0, 00:14:48.614 "max_copy_latency_ticks": 0, 00:14:48.614 "min_copy_latency_ticks": 0, 00:14:48.614 "io_error": {}, 00:14:48.614 "queue_depth_polling_period": 10, 00:14:48.614 "queue_depth": 512, 00:14:48.614 "io_time": 20, 00:14:48.614 "weighted_io_time": 10240 00:14:48.614 } 
00:14:48.614 ] 00:14:48.614 }' 00:14:48.614 00:34:21 -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:48.614 00:34:22 -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:14:48.614 00:34:22 -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:14:48.614 00:34:22 -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:14:48.614 00:34:22 -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:48.614 00:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.614 00:34:22 -- common/autotest_common.sh@10 -- # set +x 00:14:48.614 00:14:48.614 Latency(us) 00:14:48.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.614 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:48.614 Malloc_QD : 2.01 52660.05 205.70 0.00 0.00 4849.21 1563.93 13822.14 00:14:48.614 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:48.614 Malloc_QD : 2.01 54786.94 214.01 0.00 0.00 4661.58 983.04 8281.37 00:14:48.614 =================================================================================================================== 00:14:48.614 Total : 107446.99 419.71 0.00 0.00 4753.50 983.04 13822.14 00:14:48.614 0 00:14:48.614 00:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.614 00:34:22 -- bdev/blockdev.sh@554 -- # killprocess 118635 00:14:48.614 00:34:22 -- common/autotest_common.sh@936 -- # '[' -z 118635 ']' 00:14:48.614 00:34:22 -- common/autotest_common.sh@940 -- # kill -0 118635 00:14:48.614 00:34:22 -- common/autotest_common.sh@941 -- # uname 00:14:48.614 00:34:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.614 00:34:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118635 00:14:48.614 00:34:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.614 00:34:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.614 00:34:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118635' 00:14:48.614 killing process with pid 118635 00:14:48.614 00:34:22 -- common/autotest_common.sh@955 -- # kill 118635 00:14:48.614 Received shutdown signal, test time was about 2.143135 seconds 00:14:48.614 00:14:48.614 Latency(us) 00:14:48.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.614 =================================================================================================================== 00:14:48.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.614 00:34:22 -- common/autotest_common.sh@960 -- # wait 118635 00:14:49.990 00:34:23 -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:14:49.990 00:14:49.990 real 0m4.625s 00:14:49.990 user 0m8.502s 00:14:49.990 sys 0m0.397s 00:14:49.990 00:34:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:49.990 00:34:23 -- common/autotest_common.sh@10 -- # set +x 00:14:49.990 ************************************ 00:14:49.990 END TEST bdev_qd_sampling 00:14:49.990 ************************************ 00:14:49.990 00:34:23 -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:14:49.990 00:34:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:49.990 00:34:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:49.990 00:34:23 -- common/autotest_common.sh@10 -- # set +x 00:14:49.990 ************************************ 00:14:49.990 START TEST bdev_error 00:14:49.990 ************************************ 00:14:49.990 00:34:23 -- 
common/autotest_common.sh@1111 -- # error_test_suite '' 00:14:49.990 00:34:23 -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:14:49.990 00:34:23 -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:14:49.990 00:34:23 -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:14:49.990 00:34:23 -- bdev/blockdev.sh@472 -- # ERR_PID=118734 00:14:49.990 Process error testing pid: 118734 00:14:49.990 00:34:23 -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 118734' 00:14:49.990 00:34:23 -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:49.990 00:34:23 -- bdev/blockdev.sh@474 -- # waitforlisten 118734 00:14:49.990 00:34:23 -- common/autotest_common.sh@817 -- # '[' -z 118734 ']' 00:14:49.990 00:34:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.990 00:34:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:49.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.990 00:34:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.990 00:34:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:49.990 00:34:23 -- common/autotest_common.sh@10 -- # set +x 00:14:50.249 [2024-04-27 00:34:23.600006] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:14:50.249 [2024-04-27 00:34:23.600206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118734 ] 00:14:50.249 [2024-04-27 00:34:23.769997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.507 [2024-04-27 00:34:23.959810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:51.075 00:34:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:51.075 00:34:24 -- common/autotest_common.sh@850 -- # return 0 00:14:51.075 00:34:24 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:51.075 00:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.075 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.075 Dev_1 00:14:51.075 00:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.075 00:34:24 -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:14:51.075 00:34:24 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:14:51.075 00:34:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:51.075 00:34:24 -- common/autotest_common.sh@887 -- # local i 00:14:51.075 00:34:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:51.075 00:34:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:51.075 00:34:24 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:14:51.075 00:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.075 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.075 00:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.075 00:34:24 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:51.075 00:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.075 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.075 [ 00:14:51.075 { 00:14:51.075 "name": "Dev_1", 00:14:51.075 "aliases": [ 00:14:51.075 "5af9773a-6f9c-4b15-93bc-f3b0a92c4c54" 
00:14:51.075 ], 00:14:51.075 "product_name": "Malloc disk", 00:14:51.075 "block_size": 512, 00:14:51.075 "num_blocks": 262144, 00:14:51.075 "uuid": "5af9773a-6f9c-4b15-93bc-f3b0a92c4c54", 00:14:51.075 "assigned_rate_limits": { 00:14:51.075 "rw_ios_per_sec": 0, 00:14:51.075 "rw_mbytes_per_sec": 0, 00:14:51.075 "r_mbytes_per_sec": 0, 00:14:51.075 "w_mbytes_per_sec": 0 00:14:51.075 }, 00:14:51.075 "claimed": false, 00:14:51.075 "zoned": false, 00:14:51.075 "supported_io_types": { 00:14:51.075 "read": true, 00:14:51.075 "write": true, 00:14:51.075 "unmap": true, 00:14:51.075 "write_zeroes": true, 00:14:51.075 "flush": true, 00:14:51.075 "reset": true, 00:14:51.075 "compare": false, 00:14:51.075 "compare_and_write": false, 00:14:51.075 "abort": true, 00:14:51.075 "nvme_admin": false, 00:14:51.075 "nvme_io": false 00:14:51.075 }, 00:14:51.075 "memory_domains": [ 00:14:51.075 { 00:14:51.075 "dma_device_id": "system", 00:14:51.075 "dma_device_type": 1 00:14:51.075 }, 00:14:51.075 { 00:14:51.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.075 "dma_device_type": 2 00:14:51.075 } 00:14:51.075 ], 00:14:51.075 "driver_specific": {} 00:14:51.075 } 00:14:51.075 ] 00:14:51.075 00:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.075 00:34:24 -- common/autotest_common.sh@893 -- # return 0 00:14:51.075 00:34:24 -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:14:51.075 00:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.075 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.075 true 00:14:51.075 00:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.075 00:34:24 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:51.075 00:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.075 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.333 Dev_2 00:14:51.333 00:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.333 00:34:24 -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:14:51.333 00:34:24 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:14:51.333 00:34:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:51.333 00:34:24 -- common/autotest_common.sh@887 -- # local i 00:14:51.333 00:34:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:51.333 00:34:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:51.333 00:34:24 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:14:51.333 00:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.333 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.333 00:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.333 00:34:24 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:51.333 00:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.333 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.333 [ 00:14:51.333 { 00:14:51.333 "name": "Dev_2", 00:14:51.333 "aliases": [ 00:14:51.333 "8dc66657-dd99-4770-b3f9-1f881afc6040" 00:14:51.333 ], 00:14:51.333 "product_name": "Malloc disk", 00:14:51.333 "block_size": 512, 00:14:51.333 "num_blocks": 262144, 00:14:51.333 "uuid": "8dc66657-dd99-4770-b3f9-1f881afc6040", 00:14:51.333 "assigned_rate_limits": { 00:14:51.333 "rw_ios_per_sec": 0, 00:14:51.333 "rw_mbytes_per_sec": 0, 00:14:51.333 "r_mbytes_per_sec": 0, 00:14:51.333 "w_mbytes_per_sec": 0 00:14:51.333 }, 00:14:51.333 "claimed": false, 00:14:51.333 "zoned": false, 00:14:51.333 
"supported_io_types": { 00:14:51.333 "read": true, 00:14:51.333 "write": true, 00:14:51.333 "unmap": true, 00:14:51.333 "write_zeroes": true, 00:14:51.333 "flush": true, 00:14:51.333 "reset": true, 00:14:51.333 "compare": false, 00:14:51.333 "compare_and_write": false, 00:14:51.333 "abort": true, 00:14:51.333 "nvme_admin": false, 00:14:51.333 "nvme_io": false 00:14:51.333 }, 00:14:51.333 "memory_domains": [ 00:14:51.334 { 00:14:51.334 "dma_device_id": "system", 00:14:51.334 "dma_device_type": 1 00:14:51.334 }, 00:14:51.334 { 00:14:51.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.334 "dma_device_type": 2 00:14:51.334 } 00:14:51.334 ], 00:14:51.334 "driver_specific": {} 00:14:51.334 } 00:14:51.334 ] 00:14:51.334 00:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.334 00:34:24 -- common/autotest_common.sh@893 -- # return 0 00:14:51.334 00:34:24 -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:51.334 00:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.334 00:34:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.334 00:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.334 00:34:24 -- bdev/blockdev.sh@484 -- # sleep 1 00:14:51.334 00:34:24 -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:51.591 Running I/O for 5 seconds... 00:14:52.529 00:34:25 -- bdev/blockdev.sh@487 -- # kill -0 118734 00:14:52.529 00:34:25 -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 118734' 00:14:52.529 Process is existed as continue on error is set. Pid: 118734 00:14:52.529 00:34:25 -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:52.529 00:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.529 00:34:25 -- common/autotest_common.sh@10 -- # set +x 00:14:52.529 00:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.529 00:34:25 -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:52.529 00:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.529 00:34:25 -- common/autotest_common.sh@10 -- # set +x 00:14:52.529 Timeout while waiting for response: 00:14:52.529 00:14:52.529 00:14:52.529 00:34:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.529 00:34:26 -- bdev/blockdev.sh@497 -- # sleep 5 00:14:56.720 00:14:56.720 Latency(us) 00:14:56.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.720 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:56.720 EE_Dev_1 : 0.90 41910.09 163.71 5.55 0.00 379.00 173.15 830.37 00:14:56.720 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:56.720 Dev_2 : 5.00 88527.98 345.81 0.00 0.00 178.04 52.60 274536.26 00:14:56.720 =================================================================================================================== 00:14:56.720 Total : 130438.07 509.52 5.55 0.00 193.83 52.60 274536.26 00:14:57.657 00:34:31 -- bdev/blockdev.sh@499 -- # killprocess 118734 00:14:57.657 00:34:31 -- common/autotest_common.sh@936 -- # '[' -z 118734 ']' 00:14:57.657 00:34:31 -- common/autotest_common.sh@940 -- # kill -0 118734 00:14:57.657 00:34:31 -- common/autotest_common.sh@941 -- # uname 00:14:57.657 00:34:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.657 00:34:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118734 00:14:57.657 killing process with pid 
118734 00:14:57.657 Received shutdown signal, test time was about 5.000000 seconds 00:14:57.657 00:14:57.657 Latency(us) 00:14:57.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.657 =================================================================================================================== 00:14:57.657 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.657 00:34:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:57.657 00:34:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:57.657 00:34:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118734' 00:14:57.657 00:34:31 -- common/autotest_common.sh@955 -- # kill 118734 00:14:57.657 00:34:31 -- common/autotest_common.sh@960 -- # wait 118734 00:14:59.032 00:34:32 -- bdev/blockdev.sh@503 -- # ERR_PID=118844 00:14:59.032 00:34:32 -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:59.032 Process error testing pid: 118844 00:14:59.032 00:34:32 -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 118844' 00:14:59.032 00:34:32 -- bdev/blockdev.sh@505 -- # waitforlisten 118844 00:14:59.032 00:34:32 -- common/autotest_common.sh@817 -- # '[' -z 118844 ']' 00:14:59.033 00:34:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.033 00:34:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:59.033 00:34:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.033 00:34:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:59.033 00:34:32 -- common/autotest_common.sh@10 -- # set +x 00:14:59.033 [2024-04-27 00:34:32.439827] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
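This second bdevperf instance is launched without the continue-on-error flag, so the five failures injected into EE_Dev_1 below are expected to abort the run. The setup it is about to perform reduces to a few RPCs; a minimal sketch, assuming the repo paths from this environment and substituting a sleep for the harness's waitforlisten step:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Start bdevperf idle (-z) so the bdevs can be configured over RPC first.
"$SPDK_DIR/build/examples/bdevperf" -z -m 0x2 -q 16 -o 4096 -w randread -t 5 &
sleep 1  # stand-in for waiting on /var/tmp/spdk.sock (waitforlisten)

# Wrap a malloc bdev in an error bdev: bdev_error_create Dev_1 exposes the
# stack as EE_Dev_1, and the inject call makes its next 5 I/Os fail.
$RPC bdev_malloc_create -b Dev_1 128 512
$RPC bdev_error_create Dev_1
$RPC bdev_malloc_create -b Dev_2 128 512
$RPC bdev_error_inject_error EE_Dev_1 all failure -n 5

# Without continue-on-error, the injected failures surface as the JSON-RPC
# "Operation not permitted" error asserted further down in this trace.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 1 perform_tests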
00:14:59.033 [2024-04-27 00:34:32.440015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118844 ] 00:14:59.033 [2024-04-27 00:34:32.607737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.291 [2024-04-27 00:34:32.788889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.860 00:34:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:59.860 00:34:33 -- common/autotest_common.sh@850 -- # return 0 00:14:59.860 00:34:33 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:59.860 00:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.860 00:34:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.126 Dev_1 00:15:00.126 00:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.126 00:34:33 -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:15:00.126 00:34:33 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:15:00.126 00:34:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:00.126 00:34:33 -- common/autotest_common.sh@887 -- # local i 00:15:00.126 00:34:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:00.126 00:34:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:00.126 00:34:33 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:00.126 00:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.126 00:34:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.126 00:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.126 00:34:33 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:15:00.126 00:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.126 00:34:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.126 [ 00:15:00.126 { 00:15:00.126 "name": "Dev_1", 00:15:00.126 "aliases": [ 00:15:00.126 "f9df6a93-bb6f-42f1-9fda-9b519fb44f6c" 00:15:00.126 ], 00:15:00.126 "product_name": "Malloc disk", 00:15:00.126 "block_size": 512, 00:15:00.126 "num_blocks": 262144, 00:15:00.126 "uuid": "f9df6a93-bb6f-42f1-9fda-9b519fb44f6c", 00:15:00.126 "assigned_rate_limits": { 00:15:00.126 "rw_ios_per_sec": 0, 00:15:00.126 "rw_mbytes_per_sec": 0, 00:15:00.126 "r_mbytes_per_sec": 0, 00:15:00.126 "w_mbytes_per_sec": 0 00:15:00.126 }, 00:15:00.126 "claimed": false, 00:15:00.126 "zoned": false, 00:15:00.126 "supported_io_types": { 00:15:00.126 "read": true, 00:15:00.126 "write": true, 00:15:00.126 "unmap": true, 00:15:00.126 "write_zeroes": true, 00:15:00.126 "flush": true, 00:15:00.126 "reset": true, 00:15:00.126 "compare": false, 00:15:00.126 "compare_and_write": false, 00:15:00.126 "abort": true, 00:15:00.126 "nvme_admin": false, 00:15:00.126 "nvme_io": false 00:15:00.126 }, 00:15:00.126 "memory_domains": [ 00:15:00.126 { 00:15:00.126 "dma_device_id": "system", 00:15:00.126 "dma_device_type": 1 00:15:00.126 }, 00:15:00.126 { 00:15:00.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.126 "dma_device_type": 2 00:15:00.126 } 00:15:00.126 ], 00:15:00.126 "driver_specific": {} 00:15:00.126 } 00:15:00.126 ] 00:15:00.126 00:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.126 00:34:33 -- common/autotest_common.sh@893 -- # return 0 00:15:00.126 00:34:33 -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:15:00.126 00:34:33 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:15:00.126 00:34:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.126 true 00:15:00.126 00:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.126 00:34:33 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:15:00.126 00:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.126 00:34:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.126 Dev_2 00:15:00.126 00:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.126 00:34:33 -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:15:00.126 00:34:33 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:15:00.126 00:34:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:00.126 00:34:33 -- common/autotest_common.sh@887 -- # local i 00:15:00.126 00:34:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:00.126 00:34:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:00.126 00:34:33 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:00.126 00:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.126 00:34:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.126 00:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.126 00:34:33 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:15:00.126 00:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.126 00:34:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 [ 00:15:00.400 { 00:15:00.400 "name": "Dev_2", 00:15:00.400 "aliases": [ 00:15:00.400 "499f0ae5-cf93-4f60-864c-ee4be7f2e03d" 00:15:00.400 ], 00:15:00.400 "product_name": "Malloc disk", 00:15:00.400 "block_size": 512, 00:15:00.400 "num_blocks": 262144, 00:15:00.400 "uuid": "499f0ae5-cf93-4f60-864c-ee4be7f2e03d", 00:15:00.400 "assigned_rate_limits": { 00:15:00.400 "rw_ios_per_sec": 0, 00:15:00.400 "rw_mbytes_per_sec": 0, 00:15:00.400 "r_mbytes_per_sec": 0, 00:15:00.400 "w_mbytes_per_sec": 0 00:15:00.400 }, 00:15:00.400 "claimed": false, 00:15:00.400 "zoned": false, 00:15:00.400 "supported_io_types": { 00:15:00.400 "read": true, 00:15:00.400 "write": true, 00:15:00.400 "unmap": true, 00:15:00.400 "write_zeroes": true, 00:15:00.400 "flush": true, 00:15:00.400 "reset": true, 00:15:00.400 "compare": false, 00:15:00.400 "compare_and_write": false, 00:15:00.400 "abort": true, 00:15:00.400 "nvme_admin": false, 00:15:00.400 "nvme_io": false 00:15:00.400 }, 00:15:00.400 "memory_domains": [ 00:15:00.400 { 00:15:00.400 "dma_device_id": "system", 00:15:00.400 "dma_device_type": 1 00:15:00.400 }, 00:15:00.400 { 00:15:00.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.400 "dma_device_type": 2 00:15:00.400 } 00:15:00.400 ], 00:15:00.400 "driver_specific": {} 00:15:00.400 } 00:15:00.400 ] 00:15:00.400 00:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.400 00:34:33 -- common/autotest_common.sh@893 -- # return 0 00:15:00.400 00:34:33 -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:15:00.400 00:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.400 00:34:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 00:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.400 00:34:33 -- bdev/blockdev.sh@515 -- # NOT wait 118844 00:15:00.400 00:34:33 -- common/autotest_common.sh@638 -- # local es=0 00:15:00.400 00:34:33 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 118844 00:15:00.400 00:34:33 -- bdev/blockdev.sh@514 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:15:00.400 00:34:33 -- common/autotest_common.sh@626 -- # local arg=wait 00:15:00.400 00:34:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.400 00:34:33 -- common/autotest_common.sh@630 -- # type -t wait 00:15:00.400 00:34:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.400 00:34:33 -- common/autotest_common.sh@641 -- # wait 118844 00:15:00.400 Running I/O for 5 seconds... 00:15:00.400 task offset: 167720 on job bdev=EE_Dev_1 fails 00:15:00.400 00:15:00.400 Latency(us) 00:15:00.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.400 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:00.400 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:15:00.400 EE_Dev_1 : 0.00 23861.17 93.21 5422.99 0.00 450.68 189.91 826.65 00:15:00.400 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:15:00.400 Dev_2 : 0.00 16958.13 66.24 0.00 0.00 665.73 153.60 1221.35 00:15:00.400 =================================================================================================================== 00:15:00.400 Total : 40819.31 159.45 5422.99 0.00 567.32 153.60 1221.35 00:15:00.400 [2024-04-27 00:34:33.840714] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:00.400 request: 00:15:00.400 { 00:15:00.400 "method": "perform_tests", 00:15:00.400 "req_id": 1 00:15:00.400 } 00:15:00.400 Got JSON-RPC error response 00:15:00.400 response: 00:15:00.400 { 00:15:00.400 "code": -32603, 00:15:00.400 "message": "bdevperf failed with error Operation not permitted" 00:15:00.400 } 00:15:02.302 00:34:35 -- common/autotest_common.sh@641 -- # es=255 00:15:02.302 00:34:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:02.302 00:34:35 -- common/autotest_common.sh@650 -- # es=127 00:15:02.302 00:34:35 -- common/autotest_common.sh@651 -- # case "$es" in 00:15:02.302 00:34:35 -- common/autotest_common.sh@658 -- # es=1 00:15:02.302 00:34:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:02.302 00:15:02.302 real 0m11.835s 00:15:02.302 user 0m12.004s 00:15:02.302 sys 0m0.876s 00:15:02.302 00:34:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:02.302 00:34:35 -- common/autotest_common.sh@10 -- # set +x 00:15:02.302 ************************************ 00:15:02.302 END TEST bdev_error 00:15:02.302 ************************************ 00:15:02.302 00:34:35 -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:15:02.302 00:34:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:02.302 00:34:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.302 00:34:35 -- common/autotest_common.sh@10 -- # set +x 00:15:02.303 ************************************ 00:15:02.303 START TEST bdev_stat 00:15:02.303 ************************************ 00:15:02.303 00:34:35 -- common/autotest_common.sh@1111 -- # stat_test_suite '' 00:15:02.303 00:34:35 -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:15:02.303 00:34:35 -- bdev/blockdev.sh@596 -- # STAT_PID=118911 00:15:02.303 Process Bdev IO statistics testing pid: 118911 00:15:02.303 00:34:35 -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 118911' 00:15:02.303 00:34:35 -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:15:02.303 00:34:35 -- bdev/blockdev.sh@599 -- # waitforlisten 118911 00:15:02.303 00:34:35 -- 
bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:15:02.303 00:34:35 -- common/autotest_common.sh@817 -- # '[' -z 118911 ']' 00:15:02.303 00:34:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.303 00:34:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:02.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.303 00:34:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.303 00:34:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:02.303 00:34:35 -- common/autotest_common.sh@10 -- # set +x 00:15:02.303 [2024-04-27 00:34:35.523726] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:02.303 [2024-04-27 00:34:35.523937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118911 ] 00:15:02.303 [2024-04-27 00:34:35.696719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:02.561 [2024-04-27 00:34:35.895220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.561 [2024-04-27 00:34:35.895227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.128 00:34:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:03.128 00:34:36 -- common/autotest_common.sh@850 -- # return 0 00:15:03.128 00:34:36 -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:15:03.128 00:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.128 00:34:36 -- common/autotest_common.sh@10 -- # set +x 00:15:03.128 Malloc_STAT 00:15:03.128 00:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.128 00:34:36 -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:15:03.128 00:34:36 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_STAT 00:15:03.128 00:34:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:03.128 00:34:36 -- common/autotest_common.sh@887 -- # local i 00:15:03.128 00:34:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:03.128 00:34:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:03.128 00:34:36 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:15:03.128 00:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.128 00:34:36 -- common/autotest_common.sh@10 -- # set +x 00:15:03.128 00:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.128 00:34:36 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:15:03.128 00:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.128 00:34:36 -- common/autotest_common.sh@10 -- # set +x 00:15:03.128 [ 00:15:03.128 { 00:15:03.128 "name": "Malloc_STAT", 00:15:03.128 "aliases": [ 00:15:03.128 "1e02af1e-e45d-4e24-982f-22829665d3e3" 00:15:03.128 ], 00:15:03.128 "product_name": "Malloc disk", 00:15:03.128 "block_size": 512, 00:15:03.128 "num_blocks": 262144, 00:15:03.128 "uuid": "1e02af1e-e45d-4e24-982f-22829665d3e3", 00:15:03.128 "assigned_rate_limits": { 00:15:03.128 "rw_ios_per_sec": 0, 00:15:03.128 "rw_mbytes_per_sec": 0, 00:15:03.128 "r_mbytes_per_sec": 0, 00:15:03.128 "w_mbytes_per_sec": 0 00:15:03.128 }, 00:15:03.128 "claimed": false, 
00:15:03.128 "zoned": false, 00:15:03.128 "supported_io_types": { 00:15:03.128 "read": true, 00:15:03.128 "write": true, 00:15:03.128 "unmap": true, 00:15:03.128 "write_zeroes": true, 00:15:03.128 "flush": true, 00:15:03.128 "reset": true, 00:15:03.128 "compare": false, 00:15:03.128 "compare_and_write": false, 00:15:03.128 "abort": true, 00:15:03.128 "nvme_admin": false, 00:15:03.128 "nvme_io": false 00:15:03.128 }, 00:15:03.128 "memory_domains": [ 00:15:03.128 { 00:15:03.128 "dma_device_id": "system", 00:15:03.128 "dma_device_type": 1 00:15:03.128 }, 00:15:03.128 { 00:15:03.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.129 "dma_device_type": 2 00:15:03.129 } 00:15:03.129 ], 00:15:03.129 "driver_specific": {} 00:15:03.129 } 00:15:03.129 ] 00:15:03.129 00:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:03.129 00:34:36 -- common/autotest_common.sh@893 -- # return 0 00:15:03.129 00:34:36 -- bdev/blockdev.sh@605 -- # sleep 2 00:15:03.129 00:34:36 -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:03.129 Running I/O for 10 seconds... 00:15:05.031 00:34:38 -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:15:05.031 00:34:38 -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:15:05.031 00:34:38 -- bdev/blockdev.sh@560 -- # local iostats 00:15:05.031 00:34:38 -- bdev/blockdev.sh@561 -- # local io_count1 00:15:05.031 00:34:38 -- bdev/blockdev.sh@562 -- # local io_count2 00:15:05.031 00:34:38 -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:15:05.031 00:34:38 -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:15:05.031 00:34:38 -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:15:05.031 00:34:38 -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:15:05.031 00:34:38 -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:15:05.031 00:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.031 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:15:05.031 00:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.031 00:34:38 -- bdev/blockdev.sh@568 -- # iostats='{ 00:15:05.031 "tick_rate": 2200000000, 00:15:05.032 "ticks": 1709078020689, 00:15:05.032 "bdevs": [ 00:15:05.032 { 00:15:05.032 "name": "Malloc_STAT", 00:15:05.032 "bytes_read": 928027136, 00:15:05.032 "num_read_ops": 226563, 00:15:05.032 "bytes_written": 0, 00:15:05.032 "num_write_ops": 0, 00:15:05.032 "bytes_unmapped": 0, 00:15:05.032 "num_unmap_ops": 0, 00:15:05.032 "bytes_copied": 0, 00:15:05.032 "num_copy_ops": 0, 00:15:05.032 "read_latency_ticks": 2160024077362, 00:15:05.032 "max_read_latency_ticks": 11437448, 00:15:05.032 "min_read_latency_ticks": 286196, 00:15:05.032 "write_latency_ticks": 0, 00:15:05.032 "max_write_latency_ticks": 0, 00:15:05.032 "min_write_latency_ticks": 0, 00:15:05.032 "unmap_latency_ticks": 0, 00:15:05.032 "max_unmap_latency_ticks": 0, 00:15:05.032 "min_unmap_latency_ticks": 0, 00:15:05.032 "copy_latency_ticks": 0, 00:15:05.032 "max_copy_latency_ticks": 0, 00:15:05.032 "min_copy_latency_ticks": 0, 00:15:05.032 "io_error": {} 00:15:05.032 } 00:15:05.032 ] 00:15:05.032 }' 00:15:05.032 00:34:38 -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:15:05.291 00:34:38 -- bdev/blockdev.sh@569 -- # io_count1=226563 00:15:05.291 00:34:38 -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:15:05.291 00:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.291 00:34:38 -- 
common/autotest_common.sh@10 -- # set +x 00:15:05.291 00:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.291 00:34:38 -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:15:05.291 "tick_rate": 2200000000, 00:15:05.291 "ticks": 1709221487326, 00:15:05.291 "name": "Malloc_STAT", 00:15:05.291 "channels": [ 00:15:05.291 { 00:15:05.291 "thread_id": 2, 00:15:05.291 "bytes_read": 478150656, 00:15:05.291 "num_read_ops": 116736, 00:15:05.291 "bytes_written": 0, 00:15:05.291 "num_write_ops": 0, 00:15:05.291 "bytes_unmapped": 0, 00:15:05.291 "num_unmap_ops": 0, 00:15:05.291 "bytes_copied": 0, 00:15:05.291 "num_copy_ops": 0, 00:15:05.291 "read_latency_ticks": 1116050067890, 00:15:05.291 "max_read_latency_ticks": 11437448, 00:15:05.291 "min_read_latency_ticks": 7779484, 00:15:05.291 "write_latency_ticks": 0, 00:15:05.291 "max_write_latency_ticks": 0, 00:15:05.291 "min_write_latency_ticks": 0, 00:15:05.291 "unmap_latency_ticks": 0, 00:15:05.291 "max_unmap_latency_ticks": 0, 00:15:05.291 "min_unmap_latency_ticks": 0, 00:15:05.291 "copy_latency_ticks": 0, 00:15:05.291 "max_copy_latency_ticks": 0, 00:15:05.291 "min_copy_latency_ticks": 0 00:15:05.291 }, 00:15:05.291 { 00:15:05.291 "thread_id": 3, 00:15:05.291 "bytes_read": 479199232, 00:15:05.291 "num_read_ops": 116992, 00:15:05.291 "bytes_written": 0, 00:15:05.291 "num_write_ops": 0, 00:15:05.291 "bytes_unmapped": 0, 00:15:05.291 "num_unmap_ops": 0, 00:15:05.291 "bytes_copied": 0, 00:15:05.291 "num_copy_ops": 0, 00:15:05.291 "read_latency_ticks": 1117337050446, 00:15:05.291 "max_read_latency_ticks": 11212012, 00:15:05.291 "min_read_latency_ticks": 7759300, 00:15:05.291 "write_latency_ticks": 0, 00:15:05.291 "max_write_latency_ticks": 0, 00:15:05.291 "min_write_latency_ticks": 0, 00:15:05.291 "unmap_latency_ticks": 0, 00:15:05.291 "max_unmap_latency_ticks": 0, 00:15:05.291 "min_unmap_latency_ticks": 0, 00:15:05.291 "copy_latency_ticks": 0, 00:15:05.291 "max_copy_latency_ticks": 0, 00:15:05.291 "min_copy_latency_ticks": 0 00:15:05.291 } 00:15:05.291 ] 00:15:05.291 }' 00:15:05.291 00:34:38 -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:15:05.291 00:34:38 -- bdev/blockdev.sh@572 -- # io_count_per_channel1=116736 00:15:05.291 00:34:38 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=116736 00:15:05.291 00:34:38 -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:15:05.291 00:34:38 -- bdev/blockdev.sh@574 -- # io_count_per_channel2=116992 00:15:05.291 00:34:38 -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=233728 00:15:05.291 00:34:38 -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:15:05.291 00:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.291 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:15:05.291 00:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.291 00:34:38 -- bdev/blockdev.sh@577 -- # iostats='{ 00:15:05.291 "tick_rate": 2200000000, 00:15:05.291 "ticks": 1709488422500, 00:15:05.291 "bdevs": [ 00:15:05.291 { 00:15:05.291 "name": "Malloc_STAT", 00:15:05.291 "bytes_read": 1014010368, 00:15:05.291 "num_read_ops": 247555, 00:15:05.291 "bytes_written": 0, 00:15:05.291 "num_write_ops": 0, 00:15:05.291 "bytes_unmapped": 0, 00:15:05.291 "num_unmap_ops": 0, 00:15:05.291 "bytes_copied": 0, 00:15:05.291 "num_copy_ops": 0, 00:15:05.291 "read_latency_ticks": 2370680173277, 00:15:05.291 "max_read_latency_ticks": 11437448, 00:15:05.291 "min_read_latency_ticks": 286196, 00:15:05.291 "write_latency_ticks": 0, 00:15:05.291 
"max_write_latency_ticks": 0, 00:15:05.291 "min_write_latency_ticks": 0, 00:15:05.291 "unmap_latency_ticks": 0, 00:15:05.291 "max_unmap_latency_ticks": 0, 00:15:05.291 "min_unmap_latency_ticks": 0, 00:15:05.291 "copy_latency_ticks": 0, 00:15:05.291 "max_copy_latency_ticks": 0, 00:15:05.291 "min_copy_latency_ticks": 0, 00:15:05.291 "io_error": {} 00:15:05.291 } 00:15:05.291 ] 00:15:05.291 }' 00:15:05.291 00:34:38 -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:15:05.291 00:34:38 -- bdev/blockdev.sh@578 -- # io_count2=247555 00:15:05.291 00:34:38 -- bdev/blockdev.sh@583 -- # '[' 233728 -lt 226563 ']' 00:15:05.291 00:34:38 -- bdev/blockdev.sh@583 -- # '[' 233728 -gt 247555 ']' 00:15:05.291 00:34:38 -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:15:05.291 00:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:05.291 00:34:38 -- common/autotest_common.sh@10 -- # set +x 00:15:05.291 00:15:05.291 Latency(us) 00:15:05.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.291 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:15:05.291 Malloc_STAT : 2.18 58629.91 229.02 0.00 0.00 4356.43 1027.72 5213.09 00:15:05.291 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:15:05.291 Malloc_STAT : 2.18 58708.85 229.33 0.00 0.00 4350.76 741.00 5123.72 00:15:05.291 =================================================================================================================== 00:15:05.291 Total : 117338.76 458.35 0.00 0.00 4353.60 741.00 5213.09 00:15:05.550 0 00:15:05.550 00:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:05.550 00:34:38 -- bdev/blockdev.sh@609 -- # killprocess 118911 00:15:05.550 00:34:38 -- common/autotest_common.sh@936 -- # '[' -z 118911 ']' 00:15:05.550 00:34:38 -- common/autotest_common.sh@940 -- # kill -0 118911 00:15:05.550 00:34:38 -- common/autotest_common.sh@941 -- # uname 00:15:05.550 00:34:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.550 00:34:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 118911 00:15:05.550 00:34:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:05.550 00:34:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:05.550 killing process with pid 118911 00:15:05.550 00:34:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 118911' 00:15:05.550 00:34:38 -- common/autotest_common.sh@955 -- # kill 118911 00:15:05.550 Received shutdown signal, test time was about 2.317483 seconds 00:15:05.550 00:15:05.550 Latency(us) 00:15:05.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.550 =================================================================================================================== 00:15:05.550 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.550 00:34:38 -- common/autotest_common.sh@960 -- # wait 118911 00:15:06.926 00:34:40 -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:15:06.926 00:15:06.926 real 0m4.726s 00:15:06.926 user 0m8.935s 00:15:06.926 sys 0m0.436s 00:15:06.926 00:34:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.926 00:34:40 -- common/autotest_common.sh@10 -- # set +x 00:15:06.926 ************************************ 00:15:06.926 END TEST bdev_stat 00:15:06.926 ************************************ 00:15:06.926 00:34:40 -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:15:06.926 00:34:40 -- bdev/blockdev.sh@798 -- # 
[[ bdev == crypto_sw ]] 00:15:06.926 00:34:40 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:15:06.926 00:34:40 -- bdev/blockdev.sh@811 -- # cleanup 00:15:06.926 00:34:40 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:06.926 00:34:40 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:06.926 00:34:40 -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:15:06.926 00:34:40 -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:15:06.926 00:34:40 -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:15:06.926 00:34:40 -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:15:06.926 00:15:06.926 real 2m21.742s 00:15:06.926 user 5m48.220s 00:15:06.926 sys 0m21.406s 00:15:06.926 00:34:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.926 00:34:40 -- common/autotest_common.sh@10 -- # set +x 00:15:06.926 ************************************ 00:15:06.926 END TEST blockdev_general 00:15:06.926 ************************************ 00:15:06.926 00:34:40 -- spdk/autotest.sh@186 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:15:06.926 00:34:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:06.926 00:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.926 00:34:40 -- common/autotest_common.sh@10 -- # set +x 00:15:06.926 ************************************ 00:15:06.926 START TEST bdev_raid 00:15:06.926 ************************************ 00:15:06.926 00:34:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:15:06.926 * Looking for test storage... 00:15:06.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:06.926 00:34:40 -- bdev/nbd_common.sh@6 -- # set -e 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@716 -- # uname -s 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:15:06.926 00:34:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:06.926 00:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.926 00:34:40 -- common/autotest_common.sh@10 -- # set +x 00:15:06.926 ************************************ 00:15:06.926 START TEST raid_function_test_raid0 00:15:06.926 ************************************ 00:15:06.926 00:34:40 -- common/autotest_common.sh@1111 -- # raid_function_test raid0 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@86 -- # raid_pid=119083 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 119083' 00:15:06.926 Process raid pid: 
119083 00:15:06.926 00:34:40 -- bdev/bdev_raid.sh@88 -- # waitforlisten 119083 /var/tmp/spdk-raid.sock 00:15:06.926 00:34:40 -- common/autotest_common.sh@817 -- # '[' -z 119083 ']' 00:15:06.926 00:34:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:06.926 00:34:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:06.926 00:34:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:06.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:06.926 00:34:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:06.926 00:34:40 -- common/autotest_common.sh@10 -- # set +x 00:15:07.185 [2024-04-27 00:34:40.519659] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:07.185 [2024-04-27 00:34:40.520215] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.185 [2024-04-27 00:34:40.688897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.444 [2024-04-27 00:34:40.880387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.704 [2024-04-27 00:34:41.066232] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.963 00:34:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:07.963 00:34:41 -- common/autotest_common.sh@850 -- # return 0 00:15:07.963 00:34:41 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:15:07.963 00:34:41 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:15:07.963 00:34:41 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:07.963 00:34:41 -- bdev/bdev_raid.sh@70 -- # cat 00:15:07.963 00:34:41 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:08.542 [2024-04-27 00:34:41.813967] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:08.542 [2024-04-27 00:34:41.816414] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:08.542 [2024-04-27 00:34:41.816665] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:15:08.542 [2024-04-27 00:34:41.816796] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:08.542 [2024-04-27 00:34:41.816986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:08.542 [2024-04-27 00:34:41.817427] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:15:08.542 [2024-04-27 00:34:41.817577] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000010e00 00:15:08.542 [2024-04-27 00:34:41.817897] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.542 Base_1 00:15:08.542 Base_2 00:15:08.542 00:34:41 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:08.542 00:34:41 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:08.542 00:34:41 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:15:08.542 00:34:42 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:15:08.542 00:34:42 -- 
bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:15:08.542 00:34:42 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@12 -- # local i 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:08.542 00:34:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:08.814 [2024-04-27 00:34:42.342281] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:08.814 /dev/nbd0 00:15:08.814 00:34:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:08.814 00:34:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:08.814 00:34:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:08.814 00:34:42 -- common/autotest_common.sh@855 -- # local i 00:15:08.814 00:34:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:08.814 00:34:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:08.814 00:34:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:08.814 00:34:42 -- common/autotest_common.sh@859 -- # break 00:15:08.814 00:34:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:08.814 00:34:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:08.814 00:34:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:08.814 1+0 records in 00:15:08.814 1+0 records out 00:15:08.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345621 s, 11.9 MB/s 00:15:08.814 00:34:42 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:08.814 00:34:42 -- common/autotest_common.sh@872 -- # size=4096 00:15:08.814 00:34:42 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.073 00:34:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:09.073 00:34:42 -- common/autotest_common.sh@875 -- # return 0 00:15:09.073 00:34:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.073 00:34:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.073 00:34:42 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:09.073 00:34:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:09.073 00:34:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:09.332 00:34:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:09.332 { 00:15:09.332 "nbd_device": "/dev/nbd0", 00:15:09.332 "bdev_name": "raid" 00:15:09.332 } 00:15:09.332 ]' 00:15:09.332 00:34:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:09.332 00:34:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:09.332 { 00:15:09.332 "nbd_device": "/dev/nbd0", 00:15:09.332 "bdev_name": "raid" 00:15:09.332 } 00:15:09.332 ]' 00:15:09.332 00:34:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:09.332 00:34:42 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:09.332 00:34:42 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:09.332 00:34:42 -- bdev/nbd_common.sh@65 -- # count=1 00:15:09.332 00:34:42 -- bdev/nbd_common.sh@66 -- # echo 1 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@98 -- # count=1 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@20 -- # local blksize 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:15:09.332 4096+0 records in 00:15:09.332 4096+0 records out 00:15:09.332 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0269889 s, 77.7 MB/s 00:15:09.332 00:34:42 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:09.589 4096+0 records in 00:15:09.589 4096+0 records out 00:15:09.589 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.31489 s, 6.7 MB/s 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:09.589 128+0 records in 00:15:09.589 128+0 records out 00:15:09.589 65536 bytes (66 kB, 64 KiB) copied, 0.000811059 s, 80.8 MB/s 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:09.589 2035+0 records in 00:15:09.589 2035+0 records out 00:15:09.589 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00822647 s, 127 MB/s 00:15:09.589 00:34:43 
-- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:09.589 456+0 records in 00:15:09.589 456+0 records out 00:15:09.589 233472 bytes (233 kB, 228 KiB) copied, 0.00170646 s, 137 MB/s 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@53 -- # return 0 00:15:09.589 00:34:43 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:09.589 00:34:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:09.589 00:34:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:09.589 00:34:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.589 00:34:43 -- bdev/nbd_common.sh@51 -- # local i 00:15:09.589 00:34:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.589 00:34:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:09.846 00:34:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:09.846 [2024-04-27 00:34:43.428994] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.846 00:34:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:09.846 00:34:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:09.846 00:34:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:09.846 00:34:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:09.846 00:34:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:10.104 00:34:43 -- bdev/nbd_common.sh@41 -- # break 00:15:10.104 00:34:43 -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.104 00:34:43 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:10.104 00:34:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:10.104 00:34:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@65 -- # true 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@65 -- # count=0 00:15:10.362 00:34:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:10.362 00:34:43 -- bdev/bdev_raid.sh@106 -- # count=0 00:15:10.362 00:34:43 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:15:10.362 00:34:43 
-- bdev/bdev_raid.sh@111 -- # killprocess 119083 00:15:10.362 00:34:43 -- common/autotest_common.sh@936 -- # '[' -z 119083 ']' 00:15:10.362 00:34:43 -- common/autotest_common.sh@940 -- # kill -0 119083 00:15:10.362 00:34:43 -- common/autotest_common.sh@941 -- # uname 00:15:10.362 00:34:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.362 00:34:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119083 00:15:10.362 00:34:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:10.362 00:34:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:10.362 00:34:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119083' 00:15:10.362 killing process with pid 119083 00:15:10.362 00:34:43 -- common/autotest_common.sh@955 -- # kill 119083 00:15:10.362 [2024-04-27 00:34:43.796294] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.362 00:34:43 -- common/autotest_common.sh@960 -- # wait 119083 00:15:10.362 [2024-04-27 00:34:43.796580] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.362 [2024-04-27 00:34:43.796777] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.362 [2024-04-27 00:34:43.796894] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid, state offline 00:15:10.362 [2024-04-27 00:34:43.934700] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.738 00:34:44 -- bdev/bdev_raid.sh@113 -- # return 0 00:15:11.738 00:15:11.738 real 0m4.516s 00:15:11.738 user 0m5.892s 00:15:11.738 sys 0m0.932s 00:15:11.738 ************************************ 00:15:11.738 END TEST raid_function_test_raid0 00:15:11.738 ************************************ 00:15:11.738 00:34:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:11.738 00:34:44 -- common/autotest_common.sh@10 -- # set +x 00:15:11.738 00:34:45 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:15:11.738 00:34:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:11.738 00:34:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.738 00:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:11.738 ************************************ 00:15:11.738 START TEST raid_function_test_concat 00:15:11.738 ************************************ 00:15:11.738 00:34:45 -- common/autotest_common.sh@1111 -- # raid_function_test concat 00:15:11.738 00:34:45 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:15:11.738 00:34:45 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:15:11.738 00:34:45 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:15:11.738 00:34:45 -- bdev/bdev_raid.sh@86 -- # raid_pid=119250 00:15:11.738 Process raid pid: 119250 00:15:11.738 00:34:45 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 119250' 00:15:11.738 00:34:45 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:11.738 00:34:45 -- bdev/bdev_raid.sh@88 -- # waitforlisten 119250 /var/tmp/spdk-raid.sock 00:15:11.738 00:34:45 -- common/autotest_common.sh@817 -- # '[' -z 119250 ']' 00:15:11.738 00:34:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:11.738 00:34:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
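The concat pass below repeats the flow just completed for raid0: configure_raid_bdev writes a short RPC batch to rpcs.txt and pipes it through rpc.py, which is what the rm -rf rpcs.txt / cat / rpc.py trio in the trace performs. The batch contents are never expanded in the trace, so the following is only a sketch inferred from the "blockcnt 131072, blocklen 512" entries (two 32 MiB malloc bases, 64 KiB strip):

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_level=concat   # raid0 in the pass above
cat > rpcs.txt <<EOF
bdev_malloc_create 32 512 -b Base_1
bdev_malloc_create 32 512 -b Base_2
bdev_raid_create -z 64 -r $raid_level -b "Base_1 Base_2" -n raid
EOF
$rpc_py < rpcs.txt
rm -rf rpcs.txt
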
00:15:11.738 00:34:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:11.738 00:34:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.738 00:34:45 -- common/autotest_common.sh@10 -- # set +x 00:15:11.738 [2024-04-27 00:34:45.117417] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:11.738 [2024-04-27 00:34:45.117621] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.738 [2024-04-27 00:34:45.287833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.996 [2024-04-27 00:34:45.520900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.254 [2024-04-27 00:34:45.708928] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.513 00:34:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.513 00:34:46 -- common/autotest_common.sh@850 -- # return 0 00:15:12.513 00:34:46 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:15:12.513 00:34:46 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:15:12.513 00:34:46 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:12.513 00:34:46 -- bdev/bdev_raid.sh@70 -- # cat 00:15:12.513 00:34:46 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:15:13.081 [2024-04-27 00:34:46.439472] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:13.081 [2024-04-27 00:34:46.441324] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:13.081 [2024-04-27 00:34:46.441408] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:15:13.081 [2024-04-27 00:34:46.441422] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:13.081 [2024-04-27 00:34:46.441548] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:13.081 [2024-04-27 00:34:46.441888] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:15:13.081 [2024-04-27 00:34:46.441902] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000010e00 00:15:13.081 [2024-04-27 00:34:46.442048] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.081 Base_1 00:15:13.081 Base_2 00:15:13.081 00:34:46 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:15:13.081 00:34:46 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:15:13.081 00:34:46 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:15:13.342 00:34:46 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:15:13.342 00:34:46 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:15:13.342 00:34:46 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:15:13.342 00:34:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:13.342 00:34:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:15:13.342 00:34:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.342 00:34:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:13.342 
00:34:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.342 00:34:46 -- bdev/nbd_common.sh@12 -- # local i 00:15:13.342 00:34:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.342 00:34:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.342 00:34:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:15:13.600 [2024-04-27 00:34:46.963826] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:13.600 /dev/nbd0 00:15:13.600 00:34:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.600 00:34:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.600 00:34:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:13.600 00:34:46 -- common/autotest_common.sh@855 -- # local i 00:15:13.600 00:34:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:13.600 00:34:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:13.600 00:34:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:13.600 00:34:47 -- common/autotest_common.sh@859 -- # break 00:15:13.600 00:34:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:13.600 00:34:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:13.600 00:34:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.600 1+0 records in 00:15:13.600 1+0 records out 00:15:13.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281905 s, 14.5 MB/s 00:15:13.600 00:34:47 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.600 00:34:47 -- common/autotest_common.sh@872 -- # size=4096 00:15:13.600 00:34:47 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.600 00:34:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:13.600 00:34:47 -- common/autotest_common.sh@875 -- # return 0 00:15:13.600 00:34:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.600 00:34:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.600 00:34:47 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:13.601 00:34:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:13.601 00:34:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:13.859 00:34:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:13.859 { 00:15:13.859 "nbd_device": "/dev/nbd0", 00:15:13.859 "bdev_name": "raid" 00:15:13.859 } 00:15:13.859 ]' 00:15:13.859 00:34:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:13.859 { 00:15:13.859 "nbd_device": "/dev/nbd0", 00:15:13.859 "bdev_name": "raid" 00:15:13.859 } 00:15:13.859 ]' 00:15:13.859 00:34:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:13.859 00:34:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:15:13.859 00:34:47 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:15:13.859 00:34:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:13.859 00:34:47 -- bdev/nbd_common.sh@65 -- # count=1 00:15:13.859 00:34:47 -- bdev/nbd_common.sh@66 -- # echo 1 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@98 -- # count=1 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 
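The hash blkdiscard probe above gates raid_unmap_data_verify, whose dd/blkdiscard/cmp sequence fills the next stretch of the trace (the same helper the raid0 pass ran earlier). In outline, assuming the /raidrandtest scratch file and the 512-byte LOG-SEC value read back through lsblk:

nbd=/dev/nbd0
blksize=512
rw_blk_num=4096
unmap_blk_offs=(0 1028 321)
unmap_blk_nums=(128 2035 456)
# seed the raid volume and a local reference file with the same random data
dd if=/dev/urandom of=/raidrandtest bs=$blksize count=$rw_blk_num
dd if=/raidrandtest of=$nbd bs=$blksize count=$rw_blk_num oflag=direct
blockdev --flushbufs $nbd
cmp -b -n $((rw_blk_num * blksize)) /raidrandtest $nbd
for ((i = 0; i < ${#unmap_blk_offs[@]}; i++)); do
    # zero the range in the reference file, discard it on the raid device,
    # then re-compare the full 2 MiB region
    dd if=/dev/zero of=/raidrandtest bs=$blksize seek=${unmap_blk_offs[i]} \
        count=${unmap_blk_nums[i]} conv=notrunc
    blkdiscard -o $((unmap_blk_offs[i] * blksize)) -l $((unmap_blk_nums[i] * blksize)) $nbd
    blockdev --flushbufs $nbd
    cmp -b -n $((rw_blk_num * blksize)) /raidrandtest $nbd
done
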
00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@20 -- # local blksize 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:15:13.859 4096+0 records in 00:15:13.859 4096+0 records out 00:15:13.859 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0266166 s, 78.8 MB/s 00:15:13.859 00:34:47 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:15:14.118 4096+0 records in 00:15:14.118 4096+0 records out 00:15:14.118 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.308837 s, 6.8 MB/s 00:15:14.118 00:34:47 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:15:14.118 00:34:47 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:15:14.376 128+0 records in 00:15:14.376 128+0 records out 00:15:14.376 65536 bytes (66 kB, 64 KiB) copied, 0.000750935 s, 87.3 MB/s 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:15:14.376 2035+0 records in 00:15:14.376 2035+0 records out 00:15:14.376 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00688914 s, 151 MB/s 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:15:14.376 00:34:47 -- 
bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:15:14.376 456+0 records in 00:15:14.376 456+0 records out 00:15:14.376 233472 bytes (233 kB, 228 KiB) copied, 0.00213911 s, 109 MB/s 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@53 -- # return 0 00:15:14.376 00:34:47 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:14.376 00:34:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:14.376 00:34:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:14.376 00:34:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.376 00:34:47 -- bdev/nbd_common.sh@51 -- # local i 00:15:14.376 00:34:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.376 00:34:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.685 [2024-04-27 00:34:48.061435] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@41 -- # break 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.685 00:34:48 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:14.685 00:34:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@65 -- # true 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@65 -- # count=0 00:15:14.943 00:34:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:14.943 00:34:48 -- bdev/bdev_raid.sh@106 -- # count=0 00:15:14.943 00:34:48 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:15:14.943 00:34:48 -- bdev/bdev_raid.sh@111 -- # killprocess 119250 00:15:14.943 00:34:48 -- common/autotest_common.sh@936 -- # '[' -z 119250 ']' 00:15:14.943 00:34:48 -- common/autotest_common.sh@940 -- # kill -0 119250 00:15:14.943 00:34:48 -- common/autotest_common.sh@941 -- # uname 00:15:14.943 00:34:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.943 00:34:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119250 00:15:14.943 00:34:48 
-- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:14.943 killing process with pid 119250 00:15:14.943 00:34:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:14.943 00:34:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119250' 00:15:14.943 00:34:48 -- common/autotest_common.sh@955 -- # kill 119250 00:15:14.943 [2024-04-27 00:34:48.413891] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.943 00:34:48 -- common/autotest_common.sh@960 -- # wait 119250 00:15:14.943 [2024-04-27 00:34:48.414032] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.943 [2024-04-27 00:34:48.414095] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.943 [2024-04-27 00:34:48.414110] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid, state offline 00:15:15.201 [2024-04-27 00:34:48.580342] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@113 -- # return 0 00:15:16.135 00:15:16.135 real 0m4.544s 00:15:16.135 user 0m5.967s 00:15:16.135 sys 0m0.931s 00:15:16.135 00:34:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.135 00:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.135 ************************************ 00:15:16.135 END TEST raid_function_test_concat 00:15:16.135 ************************************ 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:15:16.135 00:34:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:16.135 00:34:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.135 00:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.135 ************************************ 00:15:16.135 START TEST raid0_resize_test 00:15:16.135 ************************************ 00:15:16.135 00:34:49 -- common/autotest_common.sh@1111 -- # raid0_resize_test 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@301 -- # raid_pid=119409 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:16.135 Process raid pid: 119409 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 119409' 00:15:16.135 00:34:49 -- bdev/bdev_raid.sh@303 -- # waitforlisten 119409 /var/tmp/spdk-raid.sock 00:15:16.135 00:34:49 -- common/autotest_common.sh@817 -- # '[' -z 119409 ']' 00:15:16.135 00:34:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:16.135 00:34:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:16.135 00:34:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:16.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
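The raid0_resize_test starting here builds a 2 x 32 MiB raid0 from null bdevs, grows the bases one at a time, and asserts that the raid only doubles once both bases have grown. Condensed from the RPC names in the trace below, with the surrounding assertions omitted:

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc_py bdev_null_create Base_1 32 512
$rpc_py bdev_null_create Base_2 32 512
$rpc_py bdev_raid_create -z 64 -r 0 -b "Base_1 Base_2" -n Raid
$rpc_py bdev_null_resize Base_1 64
$rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # still 131072 (64 MiB)
$rpc_py bdev_null_resize Base_2 64
$rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'   # now 262144 (128 MiB)
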
00:15:16.135 00:34:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:16.135 00:34:49 -- common/autotest_common.sh@10 -- # set +x 00:15:16.408 [2024-04-27 00:34:49.735425] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:16.408 [2024-04-27 00:34:49.735624] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.408 [2024-04-27 00:34:49.892273] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.686 [2024-04-27 00:34:50.097908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.944 [2024-04-27 00:34:50.283295] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.202 00:34:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:17.202 00:34:50 -- common/autotest_common.sh@850 -- # return 0 00:15:17.202 00:34:50 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:15:17.461 Base_1 00:15:17.461 00:34:50 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:15:17.719 Base_2 00:15:17.720 00:34:51 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:15:17.978 [2024-04-27 00:34:51.379135] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:15:17.978 [2024-04-27 00:34:51.381492] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:15:17.978 [2024-04-27 00:34:51.381586] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:15:17.978 [2024-04-27 00:34:51.381600] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:17.978 [2024-04-27 00:34:51.381798] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:15:17.978 [2024-04-27 00:34:51.382160] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:15:17.978 [2024-04-27 00:34:51.382175] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000010e00 00:15:17.978 [2024-04-27 00:34:51.382388] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.978 00:34:51 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:15:18.237 [2024-04-27 00:34:51.623147] bdev_raid.c:2222:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:18.237 [2024-04-27 00:34:51.623184] bdev_raid.c:2235:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:15:18.237 true 00:15:18.237 00:34:51 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:18.237 00:34:51 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:15:18.495 [2024-04-27 00:34:51.839286] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.495 00:34:51 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:15:18.495 00:34:51 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:15:18.495 00:34:51 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:15:18.495 
00:34:51 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:15:18.495 [2024-04-27 00:34:52.047173] bdev_raid.c:2222:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:15:18.495 [2024-04-27 00:34:52.047207] bdev_raid.c:2235:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:15:18.495 [2024-04-27 00:34:52.047269] bdev_raid.c:2249:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:15:18.495 true 00:15:18.495 00:34:52 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:15:18.495 00:34:52 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:15:18.754 [2024-04-27 00:34:52.283359] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.754 00:34:52 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:15:18.754 00:34:52 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:15:18.754 00:34:52 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:15:18.754 00:34:52 -- bdev/bdev_raid.sh@332 -- # killprocess 119409 00:15:18.754 00:34:52 -- common/autotest_common.sh@936 -- # '[' -z 119409 ']' 00:15:18.754 00:34:52 -- common/autotest_common.sh@940 -- # kill -0 119409 00:15:18.754 00:34:52 -- common/autotest_common.sh@941 -- # uname 00:15:18.754 00:34:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.754 00:34:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119409 00:15:18.754 00:34:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.754 killing process with pid 119409 00:15:18.754 00:34:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.754 00:34:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119409' 00:15:18.754 00:34:52 -- common/autotest_common.sh@955 -- # kill 119409 00:15:18.754 [2024-04-27 00:34:52.318468] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.754 [2024-04-27 00:34:52.318560] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.754 00:34:52 -- common/autotest_common.sh@960 -- # wait 119409 00:15:18.754 [2024-04-27 00:34:52.318621] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.754 [2024-04-27 00:34:52.318634] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Raid, state offline 00:15:18.754 [2024-04-27 00:34:52.319273] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@334 -- # return 0 00:15:20.131 00:15:20.131 real 0m3.602s 00:15:20.131 user 0m5.216s 00:15:20.131 sys 0m0.467s 00:15:20.131 ************************************ 00:15:20.131 00:34:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:20.131 00:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.131 END TEST raid0_resize_test 00:15:20.131 ************************************ 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:15:20.131 00:34:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:20.131 00:34:53 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:15:20.131 00:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.131 ************************************ 00:15:20.131 START TEST raid_state_function_test 00:15:20.131 ************************************ 00:15:20.131 00:34:53 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 2 false 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=119504 00:15:20.131 Process raid pid: 119504 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119504' 00:15:20.131 00:34:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119504 /var/tmp/spdk-raid.sock 00:15:20.131 00:34:53 -- common/autotest_common.sh@817 -- # '[' -z 119504 ']' 00:15:20.131 00:34:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:20.131 00:34:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:20.131 00:34:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:20.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:20.131 00:34:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:20.131 00:34:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.131 [2024-04-27 00:34:53.447600] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
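raid_state_function_test drives Existed_Raid through its configuring and online states, repeatedly dumping bdev_raid_get_bdevs and asserting on the JSON. A rough reading of the verify_raid_bdev_state checks traced below; the real helper also validates raid_level, strip_size_kb and the base-bdev counters, which this sketch skips:

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
verify_raid_bdev_state() {
    local raid_bdev_name=$1 expected_state=$2
    local raid_bdev_info
    raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
        jq -r ".[] | select(.name == \"$raid_bdev_name\")")
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
}
verify_raid_bdev_state Existed_Raid configuring
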
00:15:20.131 [2024-04-27 00:34:53.447825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.131 [2024-04-27 00:34:53.616365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.390 [2024-04-27 00:34:53.807676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.649 [2024-04-27 00:34:53.986314] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.907 00:34:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:20.907 00:34:54 -- common/autotest_common.sh@850 -- # return 0 00:15:20.907 00:34:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:21.165 [2024-04-27 00:34:54.560569] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:21.165 [2024-04-27 00:34:54.560639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:21.165 [2024-04-27 00:34:54.560668] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:21.165 [2024-04-27 00:34:54.560689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.165 00:34:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.423 00:34:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.423 "name": "Existed_Raid", 00:15:21.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.423 "strip_size_kb": 64, 00:15:21.423 "state": "configuring", 00:15:21.423 "raid_level": "raid0", 00:15:21.423 "superblock": false, 00:15:21.423 "num_base_bdevs": 2, 00:15:21.423 "num_base_bdevs_discovered": 0, 00:15:21.423 "num_base_bdevs_operational": 2, 00:15:21.423 "base_bdevs_list": [ 00:15:21.423 { 00:15:21.423 "name": "BaseBdev1", 00:15:21.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.423 "is_configured": false, 00:15:21.423 "data_offset": 0, 00:15:21.423 "data_size": 0 00:15:21.423 }, 00:15:21.423 { 00:15:21.423 "name": "BaseBdev2", 00:15:21.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.423 "is_configured": false, 00:15:21.423 "data_offset": 0, 00:15:21.423 "data_size": 0 00:15:21.423 } 00:15:21.423 ] 00:15:21.423 }' 00:15:21.423 00:34:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.423 00:34:54 -- 
common/autotest_common.sh@10 -- # set +x 00:15:21.990 00:34:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:22.248 [2024-04-27 00:34:55.632716] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.248 [2024-04-27 00:34:55.632791] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:15:22.248 00:34:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.506 [2024-04-27 00:34:55.836766] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:22.506 [2024-04-27 00:34:55.836895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:22.506 [2024-04-27 00:34:55.836926] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.506 [2024-04-27 00:34:55.836975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.506 00:34:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:22.764 [2024-04-27 00:34:56.118304] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.764 BaseBdev1 00:15:22.764 00:34:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:22.764 00:34:56 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:22.764 00:34:56 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:22.764 00:34:56 -- common/autotest_common.sh@887 -- # local i 00:15:22.764 00:34:56 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:22.764 00:34:56 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:22.765 00:34:56 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.023 00:34:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.023 [ 00:15:23.023 { 00:15:23.023 "name": "BaseBdev1", 00:15:23.023 "aliases": [ 00:15:23.023 "2df24fb5-0fa7-4108-9922-d07bb4391435" 00:15:23.023 ], 00:15:23.023 "product_name": "Malloc disk", 00:15:23.023 "block_size": 512, 00:15:23.023 "num_blocks": 65536, 00:15:23.023 "uuid": "2df24fb5-0fa7-4108-9922-d07bb4391435", 00:15:23.023 "assigned_rate_limits": { 00:15:23.023 "rw_ios_per_sec": 0, 00:15:23.023 "rw_mbytes_per_sec": 0, 00:15:23.023 "r_mbytes_per_sec": 0, 00:15:23.023 "w_mbytes_per_sec": 0 00:15:23.023 }, 00:15:23.023 "claimed": true, 00:15:23.023 "claim_type": "exclusive_write", 00:15:23.023 "zoned": false, 00:15:23.023 "supported_io_types": { 00:15:23.023 "read": true, 00:15:23.023 "write": true, 00:15:23.024 "unmap": true, 00:15:23.024 "write_zeroes": true, 00:15:23.024 "flush": true, 00:15:23.024 "reset": true, 00:15:23.024 "compare": false, 00:15:23.024 "compare_and_write": false, 00:15:23.024 "abort": true, 00:15:23.024 "nvme_admin": false, 00:15:23.024 "nvme_io": false 00:15:23.024 }, 00:15:23.024 "memory_domains": [ 00:15:23.024 { 00:15:23.024 "dma_device_id": "system", 00:15:23.024 "dma_device_type": 1 00:15:23.024 }, 00:15:23.024 { 00:15:23.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.024 "dma_device_type": 2 00:15:23.024 
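The [[ -z '' ]] / bdev_timeout=2000 entries around here are waitforbdev, which parks until BaseBdev2 is registered before the test reads back its JSON descriptor. Approximately, with the 2000 ms default visible in the trace (the upstream helper may add retries that this sketch leaves out):

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # ms, the -t 2000 seen in the trace
    $rpc_py bdev_wait_for_examine
    # fails if the bdev does not show up within the timeout
    $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
}
waitforbdev BaseBdev2
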
} 00:15:23.024 ], 00:15:23.024 "driver_specific": {} 00:15:23.024 } 00:15:23.024 ] 00:15:23.024 00:34:56 -- common/autotest_common.sh@893 -- # return 0 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.024 00:34:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.282 00:34:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.282 "name": "Existed_Raid", 00:15:23.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.282 "strip_size_kb": 64, 00:15:23.282 "state": "configuring", 00:15:23.282 "raid_level": "raid0", 00:15:23.282 "superblock": false, 00:15:23.282 "num_base_bdevs": 2, 00:15:23.282 "num_base_bdevs_discovered": 1, 00:15:23.282 "num_base_bdevs_operational": 2, 00:15:23.282 "base_bdevs_list": [ 00:15:23.282 { 00:15:23.282 "name": "BaseBdev1", 00:15:23.282 "uuid": "2df24fb5-0fa7-4108-9922-d07bb4391435", 00:15:23.282 "is_configured": true, 00:15:23.282 "data_offset": 0, 00:15:23.282 "data_size": 65536 00:15:23.282 }, 00:15:23.282 { 00:15:23.282 "name": "BaseBdev2", 00:15:23.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.282 "is_configured": false, 00:15:23.282 "data_offset": 0, 00:15:23.282 "data_size": 0 00:15:23.282 } 00:15:23.282 ] 00:15:23.282 }' 00:15:23.282 00:34:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.282 00:34:56 -- common/autotest_common.sh@10 -- # set +x 00:15:23.849 00:34:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:24.106 [2024-04-27 00:34:57.606965] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.106 [2024-04-27 00:34:57.607043] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:15:24.106 00:34:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:24.106 00:34:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:24.364 [2024-04-27 00:34:57.815022] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.364 [2024-04-27 00:34:57.817012] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.364 [2024-04-27 00:34:57.817087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.364 00:34:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.365 00:34:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.365 00:34:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.365 00:34:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.365 00:34:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.623 00:34:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.623 "name": "Existed_Raid", 00:15:24.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.623 "strip_size_kb": 64, 00:15:24.623 "state": "configuring", 00:15:24.623 "raid_level": "raid0", 00:15:24.623 "superblock": false, 00:15:24.623 "num_base_bdevs": 2, 00:15:24.623 "num_base_bdevs_discovered": 1, 00:15:24.623 "num_base_bdevs_operational": 2, 00:15:24.623 "base_bdevs_list": [ 00:15:24.623 { 00:15:24.623 "name": "BaseBdev1", 00:15:24.623 "uuid": "2df24fb5-0fa7-4108-9922-d07bb4391435", 00:15:24.623 "is_configured": true, 00:15:24.623 "data_offset": 0, 00:15:24.623 "data_size": 65536 00:15:24.623 }, 00:15:24.623 { 00:15:24.623 "name": "BaseBdev2", 00:15:24.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.623 "is_configured": false, 00:15:24.623 "data_offset": 0, 00:15:24.623 "data_size": 0 00:15:24.623 } 00:15:24.623 ] 00:15:24.623 }' 00:15:24.623 00:34:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.623 00:34:58 -- common/autotest_common.sh@10 -- # set +x 00:15:25.189 00:34:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.447 [2024-04-27 00:34:58.937355] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:25.447 [2024-04-27 00:34:58.937425] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:15:25.447 [2024-04-27 00:34:58.937435] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:25.447 [2024-04-27 00:34:58.937548] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:25.447 [2024-04-27 00:34:58.937958] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:15:25.447 [2024-04-27 00:34:58.937976] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:15:25.447 [2024-04-27 00:34:58.938321] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.447 BaseBdev2 00:15:25.447 00:34:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:25.447 00:34:58 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:15:25.447 00:34:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:25.447 00:34:58 -- common/autotest_common.sh@887 -- # local i 00:15:25.447 00:34:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
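Annotation (not part of the captured log): the trace above shows the RPC sequence that moves Existed_Raid from "configuring" to "online". bdev_raid_create succeeds even though BaseBdev2 does not exist yet; the raid simply stays in "configuring" until the missing base bdev appears, at which point it is claimed and the state flips to "online". A minimal stand-alone sketch of that sequence, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock (socket path and commands taken verbatim from the log; the trailing jq projection onto .state is an illustrative addition):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB bdev, 512-byte blocks -> 65536 blocks
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # -> "configuring": BaseBdev2 has not been registered yet
    $rpc bdev_malloc_create 32 512 -b BaseBdev2   # raid claims it as it appears
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # -> "online"
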
00:15:25.447 00:34:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:25.447 00:34:58 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.706 00:34:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.964 [ 00:15:25.964 { 00:15:25.964 "name": "BaseBdev2", 00:15:25.964 "aliases": [ 00:15:25.965 "19c8aba7-472d-4fb9-8d8b-3bd54f17082b" 00:15:25.965 ], 00:15:25.965 "product_name": "Malloc disk", 00:15:25.965 "block_size": 512, 00:15:25.965 "num_blocks": 65536, 00:15:25.965 "uuid": "19c8aba7-472d-4fb9-8d8b-3bd54f17082b", 00:15:25.965 "assigned_rate_limits": { 00:15:25.965 "rw_ios_per_sec": 0, 00:15:25.965 "rw_mbytes_per_sec": 0, 00:15:25.965 "r_mbytes_per_sec": 0, 00:15:25.965 "w_mbytes_per_sec": 0 00:15:25.965 }, 00:15:25.965 "claimed": true, 00:15:25.965 "claim_type": "exclusive_write", 00:15:25.965 "zoned": false, 00:15:25.965 "supported_io_types": { 00:15:25.965 "read": true, 00:15:25.965 "write": true, 00:15:25.965 "unmap": true, 00:15:25.965 "write_zeroes": true, 00:15:25.965 "flush": true, 00:15:25.965 "reset": true, 00:15:25.965 "compare": false, 00:15:25.965 "compare_and_write": false, 00:15:25.965 "abort": true, 00:15:25.965 "nvme_admin": false, 00:15:25.965 "nvme_io": false 00:15:25.965 }, 00:15:25.965 "memory_domains": [ 00:15:25.965 { 00:15:25.965 "dma_device_id": "system", 00:15:25.965 "dma_device_type": 1 00:15:25.965 }, 00:15:25.965 { 00:15:25.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.965 "dma_device_type": 2 00:15:25.965 } 00:15:25.965 ], 00:15:25.965 "driver_specific": {} 00:15:25.965 } 00:15:25.965 ] 00:15:25.965 00:34:59 -- common/autotest_common.sh@893 -- # return 0 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.965 00:34:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.224 00:34:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.224 "name": "Existed_Raid", 00:15:26.224 "uuid": "164ceaf1-ebbc-4a9f-966d-b786fbe9ad43", 00:15:26.224 "strip_size_kb": 64, 00:15:26.224 "state": "online", 00:15:26.224 "raid_level": "raid0", 00:15:26.224 "superblock": false, 00:15:26.224 "num_base_bdevs": 2, 00:15:26.224 "num_base_bdevs_discovered": 2, 00:15:26.224 "num_base_bdevs_operational": 2, 00:15:26.224 "base_bdevs_list": [ 00:15:26.224 { 00:15:26.224 "name": "BaseBdev1", 00:15:26.224 "uuid": 
"2df24fb5-0fa7-4108-9922-d07bb4391435", 00:15:26.224 "is_configured": true, 00:15:26.224 "data_offset": 0, 00:15:26.224 "data_size": 65536 00:15:26.224 }, 00:15:26.224 { 00:15:26.224 "name": "BaseBdev2", 00:15:26.224 "uuid": "19c8aba7-472d-4fb9-8d8b-3bd54f17082b", 00:15:26.224 "is_configured": true, 00:15:26.224 "data_offset": 0, 00:15:26.224 "data_size": 65536 00:15:26.224 } 00:15:26.224 ] 00:15:26.224 }' 00:15:26.224 00:34:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.224 00:34:59 -- common/autotest_common.sh@10 -- # set +x 00:15:26.790 00:35:00 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:27.049 [2024-04-27 00:35:00.597907] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.049 [2024-04-27 00:35:00.597949] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.049 [2024-04-27 00:35:00.598030] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.308 00:35:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.567 00:35:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.567 "name": "Existed_Raid", 00:15:27.567 "uuid": "164ceaf1-ebbc-4a9f-966d-b786fbe9ad43", 00:15:27.567 "strip_size_kb": 64, 00:15:27.567 "state": "offline", 00:15:27.567 "raid_level": "raid0", 00:15:27.567 "superblock": false, 00:15:27.567 "num_base_bdevs": 2, 00:15:27.567 "num_base_bdevs_discovered": 1, 00:15:27.567 "num_base_bdevs_operational": 1, 00:15:27.567 "base_bdevs_list": [ 00:15:27.567 { 00:15:27.567 "name": null, 00:15:27.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.567 "is_configured": false, 00:15:27.567 "data_offset": 0, 00:15:27.567 "data_size": 65536 00:15:27.567 }, 00:15:27.567 { 00:15:27.567 "name": "BaseBdev2", 00:15:27.567 "uuid": "19c8aba7-472d-4fb9-8d8b-3bd54f17082b", 00:15:27.567 "is_configured": true, 00:15:27.567 "data_offset": 0, 00:15:27.567 "data_size": 65536 00:15:27.567 } 00:15:27.567 ] 00:15:27.567 }' 00:15:27.567 00:35:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.567 00:35:00 -- common/autotest_common.sh@10 -- # set +x 00:15:28.134 00:35:01 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:28.134 00:35:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:28.134 00:35:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.134 00:35:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:28.393 00:35:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:28.393 00:35:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.393 00:35:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:28.663 [2024-04-27 00:35:02.110125] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.663 [2024-04-27 00:35:02.110231] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:15:28.663 00:35:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:28.663 00:35:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:28.663 00:35:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.663 00:35:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:28.921 00:35:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:28.921 00:35:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:28.921 00:35:02 -- bdev/bdev_raid.sh@287 -- # killprocess 119504 00:15:28.921 00:35:02 -- common/autotest_common.sh@936 -- # '[' -z 119504 ']' 00:15:28.921 00:35:02 -- common/autotest_common.sh@940 -- # kill -0 119504 00:15:28.921 00:35:02 -- common/autotest_common.sh@941 -- # uname 00:15:28.921 00:35:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.921 00:35:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119504 00:15:28.921 00:35:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:28.921 00:35:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:28.921 00:35:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119504' 00:15:28.921 killing process with pid 119504 00:15:28.921 00:35:02 -- common/autotest_common.sh@955 -- # kill 119504 00:15:28.921 00:35:02 -- common/autotest_common.sh@960 -- # wait 119504 00:15:28.921 [2024-04-27 00:35:02.476599] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:28.921 [2024-04-27 00:35:02.476716] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.295 ************************************ 00:15:30.295 END TEST raid_state_function_test 00:15:30.295 ************************************ 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:30.295 00:15:30.295 real 0m10.090s 00:15:30.295 user 0m17.520s 00:15:30.295 sys 0m1.246s 00:15:30.295 00:35:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.295 00:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:30.295 00:35:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:30.295 00:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.295 00:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:30.295 ************************************ 00:15:30.295 START TEST raid_state_function_test_sb 00:15:30.295 ************************************ 00:15:30.295 00:35:03 -- common/autotest_common.sh@1111 -- # 
raid_state_function_test raid0 2 true 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=119829 00:15:30.295 Process raid pid: 119829 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119829' 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:30.295 00:35:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119829 /var/tmp/spdk-raid.sock 00:15:30.295 00:35:03 -- common/autotest_common.sh@817 -- # '[' -z 119829 ']' 00:15:30.295 00:35:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:30.295 00:35:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:30.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:30.295 00:35:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:30.295 00:35:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:30.295 00:35:03 -- common/autotest_common.sh@10 -- # set +x 00:15:30.295 [2024-04-27 00:35:03.626375] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
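Annotation (not part of the captured log): raid_state_function_test_sb repeats the state machine above with one functional difference, the -s flag on bdev_raid_create, which writes an on-disk superblock to every base bdev. The reserved space is visible in the JSON dumps later in this trace: each 65536-block malloc bdev reports data_offset 2048 and data_size 63488 (65536 - 2048), so the assembled two-disk raid0 registers with blockcnt 126976 (2 x 63488) instead of the 131072 (2 x 65536) seen in the superblock-less run. A minimal sketch of that check, assuming the same socket path as the log (the jq projection is an illustrative addition):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "Existed_Raid") | .base_bdevs_list[]
               | "\(.name) offset=\(.data_offset) size=\(.data_size)"'
    # expected per base bdev: offset=2048 size=63488
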
00:15:30.295 [2024-04-27 00:35:03.626575] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.295 [2024-04-27 00:35:03.796099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.554 [2024-04-27 00:35:03.984294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.824 [2024-04-27 00:35:04.176577] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.101 00:35:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:31.101 00:35:04 -- common/autotest_common.sh@850 -- # return 0 00:15:31.101 00:35:04 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:31.359 [2024-04-27 00:35:04.823073] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.359 [2024-04-27 00:35:04.823163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.359 [2024-04-27 00:35:04.823177] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.359 [2024-04-27 00:35:04.823199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.359 00:35:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.618 00:35:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.618 "name": "Existed_Raid", 00:15:31.618 "uuid": "f6b3dc80-492c-41ef-9153-896def0e278a", 00:15:31.618 "strip_size_kb": 64, 00:15:31.618 "state": "configuring", 00:15:31.618 "raid_level": "raid0", 00:15:31.618 "superblock": true, 00:15:31.618 "num_base_bdevs": 2, 00:15:31.618 "num_base_bdevs_discovered": 0, 00:15:31.618 "num_base_bdevs_operational": 2, 00:15:31.618 "base_bdevs_list": [ 00:15:31.618 { 00:15:31.618 "name": "BaseBdev1", 00:15:31.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.618 "is_configured": false, 00:15:31.618 "data_offset": 0, 00:15:31.618 "data_size": 0 00:15:31.618 }, 00:15:31.618 { 00:15:31.618 "name": "BaseBdev2", 00:15:31.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.618 "is_configured": false, 00:15:31.618 "data_offset": 0, 00:15:31.618 "data_size": 0 00:15:31.618 } 00:15:31.618 ] 00:15:31.618 }' 00:15:31.618 00:35:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.618 00:35:05 -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.185 00:35:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.444 [2024-04-27 00:35:05.923184] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.444 [2024-04-27 00:35:05.923230] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:15:32.444 00:35:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:32.702 [2024-04-27 00:35:06.115210] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.702 [2024-04-27 00:35:06.115283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.702 [2024-04-27 00:35:06.115295] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.702 [2024-04-27 00:35:06.115323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.702 00:35:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.961 [2024-04-27 00:35:06.346030] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.961 BaseBdev1 00:15:32.961 00:35:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:32.961 00:35:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:32.961 00:35:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:32.961 00:35:06 -- common/autotest_common.sh@887 -- # local i 00:15:32.961 00:35:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:32.961 00:35:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:32.961 00:35:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.219 00:35:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.478 [ 00:15:33.478 { 00:15:33.478 "name": "BaseBdev1", 00:15:33.478 "aliases": [ 00:15:33.478 "239aeab3-d356-484d-9e42-0a8566ecd515" 00:15:33.478 ], 00:15:33.478 "product_name": "Malloc disk", 00:15:33.478 "block_size": 512, 00:15:33.478 "num_blocks": 65536, 00:15:33.478 "uuid": "239aeab3-d356-484d-9e42-0a8566ecd515", 00:15:33.478 "assigned_rate_limits": { 00:15:33.478 "rw_ios_per_sec": 0, 00:15:33.478 "rw_mbytes_per_sec": 0, 00:15:33.478 "r_mbytes_per_sec": 0, 00:15:33.478 "w_mbytes_per_sec": 0 00:15:33.478 }, 00:15:33.478 "claimed": true, 00:15:33.478 "claim_type": "exclusive_write", 00:15:33.478 "zoned": false, 00:15:33.478 "supported_io_types": { 00:15:33.478 "read": true, 00:15:33.478 "write": true, 00:15:33.478 "unmap": true, 00:15:33.478 "write_zeroes": true, 00:15:33.478 "flush": true, 00:15:33.478 "reset": true, 00:15:33.478 "compare": false, 00:15:33.478 "compare_and_write": false, 00:15:33.478 "abort": true, 00:15:33.478 "nvme_admin": false, 00:15:33.478 "nvme_io": false 00:15:33.478 }, 00:15:33.478 "memory_domains": [ 00:15:33.478 { 00:15:33.478 "dma_device_id": "system", 00:15:33.478 "dma_device_type": 1 00:15:33.478 }, 00:15:33.478 { 00:15:33.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.478 "dma_device_type": 2 
00:15:33.478 } 00:15:33.478 ], 00:15:33.478 "driver_specific": {} 00:15:33.478 } 00:15:33.478 ] 00:15:33.478 00:35:06 -- common/autotest_common.sh@893 -- # return 0 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.478 00:35:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.738 00:35:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.738 "name": "Existed_Raid", 00:15:33.738 "uuid": "ac0aefbc-ba7f-491e-9f13-5b8b5aa7d51d", 00:15:33.738 "strip_size_kb": 64, 00:15:33.738 "state": "configuring", 00:15:33.738 "raid_level": "raid0", 00:15:33.738 "superblock": true, 00:15:33.738 "num_base_bdevs": 2, 00:15:33.738 "num_base_bdevs_discovered": 1, 00:15:33.738 "num_base_bdevs_operational": 2, 00:15:33.738 "base_bdevs_list": [ 00:15:33.738 { 00:15:33.738 "name": "BaseBdev1", 00:15:33.738 "uuid": "239aeab3-d356-484d-9e42-0a8566ecd515", 00:15:33.738 "is_configured": true, 00:15:33.738 "data_offset": 2048, 00:15:33.738 "data_size": 63488 00:15:33.738 }, 00:15:33.738 { 00:15:33.738 "name": "BaseBdev2", 00:15:33.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.738 "is_configured": false, 00:15:33.738 "data_offset": 0, 00:15:33.738 "data_size": 0 00:15:33.738 } 00:15:33.738 ] 00:15:33.738 }' 00:15:33.738 00:35:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.738 00:35:07 -- common/autotest_common.sh@10 -- # set +x 00:15:34.307 00:35:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:34.565 [2024-04-27 00:35:07.934457] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.565 [2024-04-27 00:35:07.934545] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:15:34.565 00:35:07 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:34.565 00:35:07 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:34.823 00:35:08 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:35.082 BaseBdev1 00:15:35.082 00:35:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:35.082 00:35:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:35.082 00:35:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:35.082 00:35:08 -- common/autotest_common.sh@887 -- # local i 00:15:35.082 00:35:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:35.082 00:35:08 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:35.082 00:35:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.340 00:35:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:35.599 [ 00:15:35.599 { 00:15:35.600 "name": "BaseBdev1", 00:15:35.600 "aliases": [ 00:15:35.600 "b396058f-537b-45dd-9ff9-8fe72f755bf8" 00:15:35.600 ], 00:15:35.600 "product_name": "Malloc disk", 00:15:35.600 "block_size": 512, 00:15:35.600 "num_blocks": 65536, 00:15:35.600 "uuid": "b396058f-537b-45dd-9ff9-8fe72f755bf8", 00:15:35.600 "assigned_rate_limits": { 00:15:35.600 "rw_ios_per_sec": 0, 00:15:35.600 "rw_mbytes_per_sec": 0, 00:15:35.600 "r_mbytes_per_sec": 0, 00:15:35.600 "w_mbytes_per_sec": 0 00:15:35.600 }, 00:15:35.600 "claimed": false, 00:15:35.600 "zoned": false, 00:15:35.600 "supported_io_types": { 00:15:35.600 "read": true, 00:15:35.600 "write": true, 00:15:35.600 "unmap": true, 00:15:35.600 "write_zeroes": true, 00:15:35.600 "flush": true, 00:15:35.600 "reset": true, 00:15:35.600 "compare": false, 00:15:35.600 "compare_and_write": false, 00:15:35.600 "abort": true, 00:15:35.600 "nvme_admin": false, 00:15:35.600 "nvme_io": false 00:15:35.600 }, 00:15:35.600 "memory_domains": [ 00:15:35.600 { 00:15:35.600 "dma_device_id": "system", 00:15:35.600 "dma_device_type": 1 00:15:35.600 }, 00:15:35.600 { 00:15:35.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.600 "dma_device_type": 2 00:15:35.600 } 00:15:35.600 ], 00:15:35.600 "driver_specific": {} 00:15:35.600 } 00:15:35.600 ] 00:15:35.600 00:35:09 -- common/autotest_common.sh@893 -- # return 0 00:15:35.600 00:35:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:35.859 [2024-04-27 00:35:09.244243] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.859 [2024-04-27 00:35:09.246291] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.859 [2024-04-27 00:35:09.246403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.859 00:35:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.118 
00:35:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.118 "name": "Existed_Raid", 00:15:36.118 "uuid": "ee88ba6a-226c-40f1-964c-af5222e78f00", 00:15:36.118 "strip_size_kb": 64, 00:15:36.118 "state": "configuring", 00:15:36.118 "raid_level": "raid0", 00:15:36.118 "superblock": true, 00:15:36.118 "num_base_bdevs": 2, 00:15:36.118 "num_base_bdevs_discovered": 1, 00:15:36.118 "num_base_bdevs_operational": 2, 00:15:36.118 "base_bdevs_list": [ 00:15:36.118 { 00:15:36.118 "name": "BaseBdev1", 00:15:36.118 "uuid": "b396058f-537b-45dd-9ff9-8fe72f755bf8", 00:15:36.118 "is_configured": true, 00:15:36.118 "data_offset": 2048, 00:15:36.118 "data_size": 63488 00:15:36.118 }, 00:15:36.118 { 00:15:36.118 "name": "BaseBdev2", 00:15:36.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.118 "is_configured": false, 00:15:36.118 "data_offset": 0, 00:15:36.118 "data_size": 0 00:15:36.118 } 00:15:36.118 ] 00:15:36.118 }' 00:15:36.118 00:35:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.118 00:35:09 -- common/autotest_common.sh@10 -- # set +x 00:15:36.685 00:35:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.943 [2024-04-27 00:35:10.409070] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.943 [2024-04-27 00:35:10.409427] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:15:36.943 [2024-04-27 00:35:10.409467] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:36.943 [2024-04-27 00:35:10.409624] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:36.943 [2024-04-27 00:35:10.410052] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:15:36.943 [2024-04-27 00:35:10.410076] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:15:36.943 [2024-04-27 00:35:10.410247] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.943 BaseBdev2 00:15:36.943 00:35:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:36.943 00:35:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:15:36.943 00:35:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:36.943 00:35:10 -- common/autotest_common.sh@887 -- # local i 00:15:36.943 00:35:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:36.943 00:35:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:36.944 00:35:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:37.202 00:35:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.461 [ 00:15:37.461 { 00:15:37.461 "name": "BaseBdev2", 00:15:37.461 "aliases": [ 00:15:37.461 "95289f30-064a-44fd-8780-85c8e4550762" 00:15:37.461 ], 00:15:37.461 "product_name": "Malloc disk", 00:15:37.461 "block_size": 512, 00:15:37.461 "num_blocks": 65536, 00:15:37.461 "uuid": "95289f30-064a-44fd-8780-85c8e4550762", 00:15:37.461 "assigned_rate_limits": { 00:15:37.461 "rw_ios_per_sec": 0, 00:15:37.461 "rw_mbytes_per_sec": 0, 00:15:37.461 "r_mbytes_per_sec": 0, 00:15:37.461 "w_mbytes_per_sec": 0 00:15:37.461 }, 00:15:37.461 "claimed": true, 00:15:37.461 "claim_type": "exclusive_write", 00:15:37.461 
"zoned": false, 00:15:37.461 "supported_io_types": { 00:15:37.461 "read": true, 00:15:37.461 "write": true, 00:15:37.461 "unmap": true, 00:15:37.461 "write_zeroes": true, 00:15:37.461 "flush": true, 00:15:37.461 "reset": true, 00:15:37.461 "compare": false, 00:15:37.461 "compare_and_write": false, 00:15:37.461 "abort": true, 00:15:37.461 "nvme_admin": false, 00:15:37.461 "nvme_io": false 00:15:37.461 }, 00:15:37.461 "memory_domains": [ 00:15:37.461 { 00:15:37.461 "dma_device_id": "system", 00:15:37.461 "dma_device_type": 1 00:15:37.461 }, 00:15:37.461 { 00:15:37.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.461 "dma_device_type": 2 00:15:37.461 } 00:15:37.461 ], 00:15:37.461 "driver_specific": {} 00:15:37.461 } 00:15:37.461 ] 00:15:37.461 00:35:10 -- common/autotest_common.sh@893 -- # return 0 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.461 00:35:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.719 00:35:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.720 "name": "Existed_Raid", 00:15:37.720 "uuid": "ee88ba6a-226c-40f1-964c-af5222e78f00", 00:15:37.720 "strip_size_kb": 64, 00:15:37.720 "state": "online", 00:15:37.720 "raid_level": "raid0", 00:15:37.720 "superblock": true, 00:15:37.720 "num_base_bdevs": 2, 00:15:37.720 "num_base_bdevs_discovered": 2, 00:15:37.720 "num_base_bdevs_operational": 2, 00:15:37.720 "base_bdevs_list": [ 00:15:37.720 { 00:15:37.720 "name": "BaseBdev1", 00:15:37.720 "uuid": "b396058f-537b-45dd-9ff9-8fe72f755bf8", 00:15:37.720 "is_configured": true, 00:15:37.720 "data_offset": 2048, 00:15:37.720 "data_size": 63488 00:15:37.720 }, 00:15:37.720 { 00:15:37.720 "name": "BaseBdev2", 00:15:37.720 "uuid": "95289f30-064a-44fd-8780-85c8e4550762", 00:15:37.720 "is_configured": true, 00:15:37.720 "data_offset": 2048, 00:15:37.720 "data_size": 63488 00:15:37.720 } 00:15:37.720 ] 00:15:37.720 }' 00:15:37.720 00:35:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.720 00:35:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.322 00:35:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:38.580 [2024-04-27 00:35:11.933564] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.580 [2024-04-27 00:35:11.933603] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:38.580 [2024-04-27 00:35:11.933654] bdev_raid.c: 449:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.580 00:35:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.839 00:35:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.839 "name": "Existed_Raid", 00:15:38.839 "uuid": "ee88ba6a-226c-40f1-964c-af5222e78f00", 00:15:38.839 "strip_size_kb": 64, 00:15:38.839 "state": "offline", 00:15:38.839 "raid_level": "raid0", 00:15:38.839 "superblock": true, 00:15:38.839 "num_base_bdevs": 2, 00:15:38.839 "num_base_bdevs_discovered": 1, 00:15:38.839 "num_base_bdevs_operational": 1, 00:15:38.839 "base_bdevs_list": [ 00:15:38.839 { 00:15:38.839 "name": null, 00:15:38.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.839 "is_configured": false, 00:15:38.839 "data_offset": 2048, 00:15:38.839 "data_size": 63488 00:15:38.839 }, 00:15:38.839 { 00:15:38.839 "name": "BaseBdev2", 00:15:38.839 "uuid": "95289f30-064a-44fd-8780-85c8e4550762", 00:15:38.839 "is_configured": true, 00:15:38.839 "data_offset": 2048, 00:15:38.839 "data_size": 63488 00:15:38.839 } 00:15:38.839 ] 00:15:38.839 }' 00:15:38.839 00:35:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.839 00:35:12 -- common/autotest_common.sh@10 -- # set +x 00:15:39.405 00:35:12 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:39.405 00:35:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:39.405 00:35:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.405 00:35:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:39.663 00:35:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:39.663 00:35:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:39.663 00:35:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:39.921 [2024-04-27 00:35:13.337711] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.921 [2024-04-27 00:35:13.337950] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:15:39.921 00:35:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:39.921 00:35:13 -- bdev/bdev_raid.sh@273 -- # (( i < 
num_base_bdevs )) 00:15:39.921 00:35:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.921 00:35:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:40.180 00:35:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:40.180 00:35:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:40.180 00:35:13 -- bdev/bdev_raid.sh@287 -- # killprocess 119829 00:15:40.180 00:35:13 -- common/autotest_common.sh@936 -- # '[' -z 119829 ']' 00:15:40.180 00:35:13 -- common/autotest_common.sh@940 -- # kill -0 119829 00:15:40.180 00:35:13 -- common/autotest_common.sh@941 -- # uname 00:15:40.180 00:35:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:40.180 00:35:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 119829 00:15:40.180 killing process with pid 119829 00:15:40.180 00:35:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:40.180 00:35:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:40.180 00:35:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 119829' 00:15:40.180 00:35:13 -- common/autotest_common.sh@955 -- # kill 119829 00:15:40.180 00:35:13 -- common/autotest_common.sh@960 -- # wait 119829 00:15:40.180 [2024-04-27 00:35:13.665664] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.180 [2024-04-27 00:35:13.665809] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.115 ************************************ 00:15:41.115 END TEST raid_state_function_test_sb 00:15:41.115 ************************************ 00:15:41.115 00:35:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:41.115 00:15:41.115 real 0m11.116s 00:15:41.115 user 0m19.404s 00:15:41.115 sys 0m1.285s 00:15:41.115 00:35:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.115 00:35:14 -- common/autotest_common.sh@10 -- # set +x 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:41.374 00:35:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:41.374 00:35:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.374 00:35:14 -- common/autotest_common.sh@10 -- # set +x 00:15:41.374 ************************************ 00:15:41.374 START TEST raid_superblock_test 00:15:41.374 ************************************ 00:15:41.374 00:35:14 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 2 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' 
raid1 ']' 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@357 -- # raid_pid=120169 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:41.374 00:35:14 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120169 /var/tmp/spdk-raid.sock 00:15:41.374 00:35:14 -- common/autotest_common.sh@817 -- # '[' -z 120169 ']' 00:15:41.374 00:35:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:41.374 00:35:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:41.374 00:35:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:41.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:41.374 00:35:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:41.374 00:35:14 -- common/autotest_common.sh@10 -- # set +x 00:15:41.374 [2024-04-27 00:35:14.831514] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:41.374 [2024-04-27 00:35:14.832005] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120169 ] 00:15:41.634 [2024-04-27 00:35:15.000609] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.634 [2024-04-27 00:35:15.190860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.894 [2024-04-27 00:35:15.367407] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.462 00:35:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:42.462 00:35:15 -- common/autotest_common.sh@850 -- # return 0 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.462 00:35:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:42.462 malloc1 00:15:42.462 00:35:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:42.721 [2024-04-27 00:35:16.224356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:42.721 [2024-04-27 00:35:16.224646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.721 [2024-04-27 00:35:16.224801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:42.721 [2024-04-27 00:35:16.224982] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.721 [2024-04-27 
00:35:16.227642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.721 [2024-04-27 00:35:16.227844] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:42.721 pt1 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:42.721 00:35:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:42.980 malloc2 00:15:42.980 00:35:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.239 [2024-04-27 00:35:16.704823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.239 [2024-04-27 00:35:16.705065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.239 [2024-04-27 00:35:16.705232] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:43.239 [2024-04-27 00:35:16.705410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.239 [2024-04-27 00:35:16.707788] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.239 [2024-04-27 00:35:16.707970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.239 pt2 00:15:43.239 00:35:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:43.239 00:35:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:43.239 00:35:16 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:43.497 [2024-04-27 00:35:16.912944] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.497 [2024-04-27 00:35:16.915255] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.498 [2024-04-27 00:35:16.915634] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:15:43.498 [2024-04-27 00:35:16.915766] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:43.498 [2024-04-27 00:35:16.915958] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:43.498 [2024-04-27 00:35:16.916378] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:15:43.498 [2024-04-27 00:35:16.916534] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:15:43.498 [2024-04-27 00:35:16.916862] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:43.498 
00:35:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.498 00:35:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.756 00:35:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.756 "name": "raid_bdev1", 00:15:43.756 "uuid": "1dfea2db-4748-4e6c-aacd-580389573cdb", 00:15:43.756 "strip_size_kb": 64, 00:15:43.756 "state": "online", 00:15:43.756 "raid_level": "raid0", 00:15:43.756 "superblock": true, 00:15:43.756 "num_base_bdevs": 2, 00:15:43.756 "num_base_bdevs_discovered": 2, 00:15:43.756 "num_base_bdevs_operational": 2, 00:15:43.756 "base_bdevs_list": [ 00:15:43.756 { 00:15:43.756 "name": "pt1", 00:15:43.756 "uuid": "e46d4357-6a00-5c18-bc46-150a9282d037", 00:15:43.756 "is_configured": true, 00:15:43.756 "data_offset": 2048, 00:15:43.756 "data_size": 63488 00:15:43.756 }, 00:15:43.756 { 00:15:43.756 "name": "pt2", 00:15:43.756 "uuid": "d4f05734-f7e6-5c78-9993-351d0983a93c", 00:15:43.756 "is_configured": true, 00:15:43.756 "data_offset": 2048, 00:15:43.756 "data_size": 63488 00:15:43.756 } 00:15:43.756 ] 00:15:43.756 }' 00:15:43.756 00:35:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.756 00:35:17 -- common/autotest_common.sh@10 -- # set +x 00:15:44.324 00:35:17 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:44.324 00:35:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:44.616 [2024-04-27 00:35:18.053401] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.616 00:35:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1dfea2db-4748-4e6c-aacd-580389573cdb 00:15:44.616 00:35:18 -- bdev/bdev_raid.sh@380 -- # '[' -z 1dfea2db-4748-4e6c-aacd-580389573cdb ']' 00:15:44.616 00:35:18 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:44.933 [2024-04-27 00:35:18.269214] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.933 [2024-04-27 00:35:18.269456] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.933 [2024-04-27 00:35:18.269643] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.933 [2024-04-27 00:35:18.269820] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.933 [2024-04-27 00:35:18.269929] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:15:44.933 00:35:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.933 00:35:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:44.933 00:35:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 
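Annotation (not part of the captured log): the next block is a negative test. raid_bdev1 was created with -s and then deleted, but deleting the raid does not wipe the superblocks left on malloc1 and malloc2, so creating a fresh raid directly on those bdevs must be rejected. A sketch of the expectation, with the RPC line and the error text taken from the trace that follows:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    if $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "unexpected success: stale superblock should block creation" >&2
        exit 1
    fi
    # expected JSON-RPC error: code -17,
    # "Failed to create RAID bdev raid_bdev1: File exists"
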
00:15:44.933 00:35:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:44.933 00:35:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.933 00:35:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:45.193 00:35:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:45.193 00:35:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:45.452 00:35:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:45.452 00:35:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:45.710 00:35:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:45.710 00:35:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:45.710 00:35:19 -- common/autotest_common.sh@638 -- # local es=0 00:15:45.710 00:35:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:45.710 00:35:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.710 00:35:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:45.710 00:35:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.710 00:35:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:45.710 00:35:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.710 00:35:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:45.710 00:35:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.710 00:35:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:45.710 00:35:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:45.970 [2024-04-27 00:35:19.417455] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:45.970 [2024-04-27 00:35:19.419649] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:45.970 [2024-04-27 00:35:19.419845] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:45.970 [2024-04-27 00:35:19.420080] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:45.970 [2024-04-27 00:35:19.420237] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.970 [2024-04-27 00:35:19.420369] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:15:45.970 request: 00:15:45.970 { 00:15:45.970 "name": "raid_bdev1", 00:15:45.970 "raid_level": "raid0", 00:15:45.970 "base_bdevs": [ 00:15:45.970 "malloc1", 00:15:45.970 "malloc2" 00:15:45.970 ], 00:15:45.970 "superblock": false, 00:15:45.970 "strip_size_kb": 64, 00:15:45.970 "method": "bdev_raid_create", 00:15:45.970 "req_id": 1 00:15:45.970 } 00:15:45.970 Got 
JSON-RPC error response 00:15:45.970 response: 00:15:45.970 { 00:15:45.970 "code": -17, 00:15:45.970 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:45.970 } 00:15:45.970 00:35:19 -- common/autotest_common.sh@641 -- # es=1 00:15:45.970 00:35:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:45.970 00:35:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:45.970 00:35:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:45.970 00:35:19 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.970 00:35:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:46.229 00:35:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:46.229 00:35:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:46.229 00:35:19 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.487 [2024-04-27 00:35:19.861452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.487 [2024-04-27 00:35:19.861742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.487 [2024-04-27 00:35:19.861932] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:46.487 [2024-04-27 00:35:19.862066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.487 [2024-04-27 00:35:19.864484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.487 [2024-04-27 00:35:19.864690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.487 [2024-04-27 00:35:19.864920] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:46.487 [2024-04-27 00:35:19.865088] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:46.487 pt1 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.487 00:35:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.746 00:35:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.746 "name": "raid_bdev1", 00:15:46.746 "uuid": "1dfea2db-4748-4e6c-aacd-580389573cdb", 00:15:46.746 "strip_size_kb": 64, 00:15:46.746 "state": "configuring", 00:15:46.746 "raid_level": "raid0", 00:15:46.746 "superblock": true, 00:15:46.746 "num_base_bdevs": 2, 00:15:46.746 "num_base_bdevs_discovered": 1, 00:15:46.746 "num_base_bdevs_operational": 2, 00:15:46.746 "base_bdevs_list": [ 00:15:46.746 { 00:15:46.746 "name": 
"pt1", 00:15:46.746 "uuid": "e46d4357-6a00-5c18-bc46-150a9282d037", 00:15:46.746 "is_configured": true, 00:15:46.746 "data_offset": 2048, 00:15:46.746 "data_size": 63488 00:15:46.746 }, 00:15:46.746 { 00:15:46.746 "name": null, 00:15:46.746 "uuid": "d4f05734-f7e6-5c78-9993-351d0983a93c", 00:15:46.746 "is_configured": false, 00:15:46.746 "data_offset": 2048, 00:15:46.746 "data_size": 63488 00:15:46.746 } 00:15:46.746 ] 00:15:46.746 }' 00:15:46.746 00:35:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.746 00:35:20 -- common/autotest_common.sh@10 -- # set +x 00:15:47.314 00:35:20 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:47.314 00:35:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:47.314 00:35:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:47.314 00:35:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.882 [2024-04-27 00:35:21.161822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.882 [2024-04-27 00:35:21.162124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.882 [2024-04-27 00:35:21.162291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:47.882 [2024-04-27 00:35:21.162502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.882 [2024-04-27 00:35:21.163074] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.882 [2024-04-27 00:35:21.163241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.882 [2024-04-27 00:35:21.163467] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:47.882 [2024-04-27 00:35:21.163611] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.882 [2024-04-27 00:35:21.163877] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:15:47.882 [2024-04-27 00:35:21.164004] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:47.882 [2024-04-27 00:35:21.164170] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:47.882 [2024-04-27 00:35:21.164573] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:15:47.882 [2024-04-27 00:35:21.164731] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:15:47.882 [2024-04-27 00:35:21.164985] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.882 pt2 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.882 
00:35:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.882 "name": "raid_bdev1", 00:15:47.882 "uuid": "1dfea2db-4748-4e6c-aacd-580389573cdb", 00:15:47.882 "strip_size_kb": 64, 00:15:47.882 "state": "online", 00:15:47.882 "raid_level": "raid0", 00:15:47.882 "superblock": true, 00:15:47.882 "num_base_bdevs": 2, 00:15:47.882 "num_base_bdevs_discovered": 2, 00:15:47.882 "num_base_bdevs_operational": 2, 00:15:47.882 "base_bdevs_list": [ 00:15:47.882 { 00:15:47.882 "name": "pt1", 00:15:47.882 "uuid": "e46d4357-6a00-5c18-bc46-150a9282d037", 00:15:47.882 "is_configured": true, 00:15:47.882 "data_offset": 2048, 00:15:47.882 "data_size": 63488 00:15:47.882 }, 00:15:47.882 { 00:15:47.882 "name": "pt2", 00:15:47.882 "uuid": "d4f05734-f7e6-5c78-9993-351d0983a93c", 00:15:47.882 "is_configured": true, 00:15:47.882 "data_offset": 2048, 00:15:47.882 "data_size": 63488 00:15:47.882 } 00:15:47.882 ] 00:15:47.882 }' 00:15:47.882 00:35:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.882 00:35:21 -- common/autotest_common.sh@10 -- # set +x 00:15:48.817 00:35:22 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:48.817 00:35:22 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:48.817 [2024-04-27 00:35:22.366257] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.817 00:35:22 -- bdev/bdev_raid.sh@430 -- # '[' 1dfea2db-4748-4e6c-aacd-580389573cdb '!=' 1dfea2db-4748-4e6c-aacd-580389573cdb ']' 00:15:48.817 00:35:22 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:48.817 00:35:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:48.817 00:35:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:48.817 00:35:22 -- bdev/bdev_raid.sh@511 -- # killprocess 120169 00:15:48.817 00:35:22 -- common/autotest_common.sh@936 -- # '[' -z 120169 ']' 00:15:48.817 00:35:22 -- common/autotest_common.sh@940 -- # kill -0 120169 00:15:48.817 00:35:22 -- common/autotest_common.sh@941 -- # uname 00:15:48.817 00:35:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.817 00:35:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120169 00:15:49.077 00:35:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:49.077 00:35:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:49.077 00:35:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120169' 00:15:49.077 killing process with pid 120169 00:15:49.077 00:35:22 -- common/autotest_common.sh@955 -- # kill 120169 00:15:49.077 [2024-04-27 00:35:22.417378] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.077 00:35:22 -- common/autotest_common.sh@960 -- # wait 120169 00:15:49.077 [2024-04-27 00:35:22.417594] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.077 [2024-04-27 00:35:22.417773] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.077 [2024-04-27 00:35:22.417881] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name 
raid_bdev1, state offline 00:15:49.077 [2024-04-27 00:35:22.560236] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.012 00:35:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:50.012 00:15:50.012 real 0m8.772s 00:15:50.012 user 0m14.978s 00:15:50.012 sys 0m1.094s 00:15:50.012 00:35:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:50.012 00:35:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.012 ************************************ 00:15:50.012 END TEST raid_superblock_test 00:15:50.012 ************************************ 00:15:50.012 00:35:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:50.012 00:35:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:50.012 00:35:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:50.012 00:35:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.012 00:35:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.271 ************************************ 00:15:50.271 START TEST raid_state_function_test 00:15:50.271 ************************************ 00:15:50.271 00:35:23 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 2 false 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=120424 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120424' 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:50.271 Process raid pid: 120424 00:15:50.271 00:35:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120424 /var/tmp/spdk-raid.sock 00:15:50.271 00:35:23 -- common/autotest_common.sh@817 -- # '[' -z 120424 ']' 00:15:50.271 00:35:23 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:15:50.271 00:35:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:50.271 00:35:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:50.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:50.271 00:35:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:50.271 00:35:23 -- common/autotest_common.sh@10 -- # set +x 00:15:50.271 [2024-04-27 00:35:23.702940] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:15:50.271 [2024-04-27 00:35:23.703346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.530 [2024-04-27 00:35:23.874780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.530 [2024-04-27 00:35:24.109210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.789 [2024-04-27 00:35:24.282316] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.358 00:35:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:51.358 00:35:24 -- common/autotest_common.sh@850 -- # return 0 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:51.358 [2024-04-27 00:35:24.883206] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.358 [2024-04-27 00:35:24.883454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.358 [2024-04-27 00:35:24.883589] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.358 [2024-04-27 00:35:24.883651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.358 00:35:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.616 00:35:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.616 "name": "Existed_Raid", 00:15:51.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.616 "strip_size_kb": 64, 00:15:51.616 "state": "configuring", 00:15:51.616 "raid_level": "concat", 00:15:51.616 "superblock": false, 00:15:51.616 "num_base_bdevs": 2, 00:15:51.616 "num_base_bdevs_discovered": 0, 00:15:51.616 
"num_base_bdevs_operational": 2, 00:15:51.616 "base_bdevs_list": [ 00:15:51.616 { 00:15:51.616 "name": "BaseBdev1", 00:15:51.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.616 "is_configured": false, 00:15:51.616 "data_offset": 0, 00:15:51.616 "data_size": 0 00:15:51.616 }, 00:15:51.616 { 00:15:51.616 "name": "BaseBdev2", 00:15:51.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.616 "is_configured": false, 00:15:51.616 "data_offset": 0, 00:15:51.616 "data_size": 0 00:15:51.616 } 00:15:51.616 ] 00:15:51.616 }' 00:15:51.616 00:35:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.616 00:35:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.553 00:35:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:52.553 [2024-04-27 00:35:26.067315] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.553 [2024-04-27 00:35:26.067577] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:15:52.553 00:35:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:52.812 [2024-04-27 00:35:26.343373] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.812 [2024-04-27 00:35:26.343675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.812 [2024-04-27 00:35:26.343797] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.812 [2024-04-27 00:35:26.343870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.812 00:35:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:53.071 [2024-04-27 00:35:26.617410] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.071 BaseBdev1 00:15:53.071 00:35:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:53.071 00:35:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:53.071 00:35:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:53.071 00:35:26 -- common/autotest_common.sh@887 -- # local i 00:15:53.071 00:35:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:53.071 00:35:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:53.071 00:35:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.329 00:35:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:53.588 [ 00:15:53.588 { 00:15:53.588 "name": "BaseBdev1", 00:15:53.588 "aliases": [ 00:15:53.588 "d9696837-9b6e-4fa5-8be5-fe4121d3869c" 00:15:53.588 ], 00:15:53.588 "product_name": "Malloc disk", 00:15:53.588 "block_size": 512, 00:15:53.588 "num_blocks": 65536, 00:15:53.588 "uuid": "d9696837-9b6e-4fa5-8be5-fe4121d3869c", 00:15:53.588 "assigned_rate_limits": { 00:15:53.588 "rw_ios_per_sec": 0, 00:15:53.588 "rw_mbytes_per_sec": 0, 00:15:53.588 "r_mbytes_per_sec": 0, 00:15:53.588 "w_mbytes_per_sec": 0 00:15:53.588 }, 00:15:53.588 "claimed": true, 00:15:53.588 "claim_type": "exclusive_write", 00:15:53.588 "zoned": false, 00:15:53.588 
"supported_io_types": { 00:15:53.588 "read": true, 00:15:53.588 "write": true, 00:15:53.588 "unmap": true, 00:15:53.588 "write_zeroes": true, 00:15:53.588 "flush": true, 00:15:53.588 "reset": true, 00:15:53.588 "compare": false, 00:15:53.588 "compare_and_write": false, 00:15:53.588 "abort": true, 00:15:53.588 "nvme_admin": false, 00:15:53.588 "nvme_io": false 00:15:53.588 }, 00:15:53.589 "memory_domains": [ 00:15:53.589 { 00:15:53.589 "dma_device_id": "system", 00:15:53.589 "dma_device_type": 1 00:15:53.589 }, 00:15:53.589 { 00:15:53.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.589 "dma_device_type": 2 00:15:53.589 } 00:15:53.589 ], 00:15:53.589 "driver_specific": {} 00:15:53.589 } 00:15:53.589 ] 00:15:53.589 00:35:27 -- common/autotest_common.sh@893 -- # return 0 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.589 00:35:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.847 00:35:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.847 "name": "Existed_Raid", 00:15:53.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.847 "strip_size_kb": 64, 00:15:53.847 "state": "configuring", 00:15:53.847 "raid_level": "concat", 00:15:53.847 "superblock": false, 00:15:53.847 "num_base_bdevs": 2, 00:15:53.847 "num_base_bdevs_discovered": 1, 00:15:53.847 "num_base_bdevs_operational": 2, 00:15:53.847 "base_bdevs_list": [ 00:15:53.847 { 00:15:53.847 "name": "BaseBdev1", 00:15:53.847 "uuid": "d9696837-9b6e-4fa5-8be5-fe4121d3869c", 00:15:53.847 "is_configured": true, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 65536 00:15:53.847 }, 00:15:53.847 { 00:15:53.847 "name": "BaseBdev2", 00:15:53.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.847 "is_configured": false, 00:15:53.847 "data_offset": 0, 00:15:53.847 "data_size": 0 00:15:53.847 } 00:15:53.847 ] 00:15:53.847 }' 00:15:53.847 00:35:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.847 00:35:27 -- common/autotest_common.sh@10 -- # set +x 00:15:54.413 00:35:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.671 [2024-04-27 00:35:28.221868] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.671 [2024-04-27 00:35:28.222070] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:15:54.671 00:35:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:54.671 00:35:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:54.930 [2024-04-27 00:35:28.481962] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.930 [2024-04-27 00:35:28.484409] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.930 [2024-04-27 00:35:28.484631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.930 00:35:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.189 00:35:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.189 "name": "Existed_Raid", 00:15:55.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.189 "strip_size_kb": 64, 00:15:55.189 "state": "configuring", 00:15:55.189 "raid_level": "concat", 00:15:55.189 "superblock": false, 00:15:55.189 "num_base_bdevs": 2, 00:15:55.189 "num_base_bdevs_discovered": 1, 00:15:55.189 "num_base_bdevs_operational": 2, 00:15:55.189 "base_bdevs_list": [ 00:15:55.189 { 00:15:55.189 "name": "BaseBdev1", 00:15:55.189 "uuid": "d9696837-9b6e-4fa5-8be5-fe4121d3869c", 00:15:55.189 "is_configured": true, 00:15:55.189 "data_offset": 0, 00:15:55.189 "data_size": 65536 00:15:55.189 }, 00:15:55.189 { 00:15:55.189 "name": "BaseBdev2", 00:15:55.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.189 "is_configured": false, 00:15:55.189 "data_offset": 0, 00:15:55.189 "data_size": 0 00:15:55.189 } 00:15:55.189 ] 00:15:55.189 }' 00:15:55.189 00:35:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.189 00:35:28 -- common/autotest_common.sh@10 -- # set +x 00:15:56.124 00:35:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.124 [2024-04-27 00:35:29.664350] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.124 [2024-04-27 00:35:29.664629] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:15:56.124 [2024-04-27 00:35:29.664676] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:56.124 [2024-04-27 00:35:29.664946] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:56.125 [2024-04-27 00:35:29.665478] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:15:56.125 [2024-04-27 
00:35:29.665673] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:15:56.125 [2024-04-27 00:35:29.666035] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.125 BaseBdev2 00:15:56.125 00:35:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:56.125 00:35:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:15:56.125 00:35:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:56.125 00:35:29 -- common/autotest_common.sh@887 -- # local i 00:15:56.125 00:35:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:56.125 00:35:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:56.125 00:35:29 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.383 00:35:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.642 [ 00:15:56.642 { 00:15:56.642 "name": "BaseBdev2", 00:15:56.642 "aliases": [ 00:15:56.642 "023350ea-8535-44c2-a416-a6dfcd8a07a8" 00:15:56.642 ], 00:15:56.642 "product_name": "Malloc disk", 00:15:56.642 "block_size": 512, 00:15:56.642 "num_blocks": 65536, 00:15:56.642 "uuid": "023350ea-8535-44c2-a416-a6dfcd8a07a8", 00:15:56.642 "assigned_rate_limits": { 00:15:56.642 "rw_ios_per_sec": 0, 00:15:56.642 "rw_mbytes_per_sec": 0, 00:15:56.642 "r_mbytes_per_sec": 0, 00:15:56.642 "w_mbytes_per_sec": 0 00:15:56.642 }, 00:15:56.642 "claimed": true, 00:15:56.642 "claim_type": "exclusive_write", 00:15:56.642 "zoned": false, 00:15:56.642 "supported_io_types": { 00:15:56.642 "read": true, 00:15:56.642 "write": true, 00:15:56.642 "unmap": true, 00:15:56.642 "write_zeroes": true, 00:15:56.642 "flush": true, 00:15:56.642 "reset": true, 00:15:56.642 "compare": false, 00:15:56.642 "compare_and_write": false, 00:15:56.642 "abort": true, 00:15:56.642 "nvme_admin": false, 00:15:56.642 "nvme_io": false 00:15:56.642 }, 00:15:56.642 "memory_domains": [ 00:15:56.642 { 00:15:56.642 "dma_device_id": "system", 00:15:56.642 "dma_device_type": 1 00:15:56.642 }, 00:15:56.642 { 00:15:56.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.642 "dma_device_type": 2 00:15:56.642 } 00:15:56.642 ], 00:15:56.642 "driver_specific": {} 00:15:56.642 } 00:15:56.642 ] 00:15:56.642 00:35:30 -- common/autotest_common.sh@893 -- # return 0 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:56.642 00:35:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.901 00:35:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:56.901 "name": "Existed_Raid", 00:15:56.901 "uuid": "54cc8907-d278-45ea-a17f-0065908c2f8a", 00:15:56.901 "strip_size_kb": 64, 00:15:56.901 "state": "online", 00:15:56.901 "raid_level": "concat", 00:15:56.901 "superblock": false, 00:15:56.901 "num_base_bdevs": 2, 00:15:56.901 "num_base_bdevs_discovered": 2, 00:15:56.901 "num_base_bdevs_operational": 2, 00:15:56.901 "base_bdevs_list": [ 00:15:56.901 { 00:15:56.901 "name": "BaseBdev1", 00:15:56.901 "uuid": "d9696837-9b6e-4fa5-8be5-fe4121d3869c", 00:15:56.901 "is_configured": true, 00:15:56.901 "data_offset": 0, 00:15:56.901 "data_size": 65536 00:15:56.901 }, 00:15:56.901 { 00:15:56.901 "name": "BaseBdev2", 00:15:56.901 "uuid": "023350ea-8535-44c2-a416-a6dfcd8a07a8", 00:15:56.901 "is_configured": true, 00:15:56.901 "data_offset": 0, 00:15:56.901 "data_size": 65536 00:15:56.901 } 00:15:56.901 ] 00:15:56.901 }' 00:15:56.901 00:35:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.901 00:35:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.502 00:35:31 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:57.784 [2024-04-27 00:35:31.224821] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.785 [2024-04-27 00:35:31.225155] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.785 [2024-04-27 00:35:31.225336] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.785 00:35:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.043 00:35:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.043 "name": "Existed_Raid", 00:15:58.043 "uuid": "54cc8907-d278-45ea-a17f-0065908c2f8a", 00:15:58.043 "strip_size_kb": 64, 00:15:58.043 "state": "offline", 00:15:58.043 "raid_level": "concat", 00:15:58.043 "superblock": false, 00:15:58.043 "num_base_bdevs": 2, 00:15:58.043 "num_base_bdevs_discovered": 1, 00:15:58.043 "num_base_bdevs_operational": 1, 00:15:58.043 
"base_bdevs_list": [ 00:15:58.043 { 00:15:58.043 "name": null, 00:15:58.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.043 "is_configured": false, 00:15:58.043 "data_offset": 0, 00:15:58.043 "data_size": 65536 00:15:58.043 }, 00:15:58.043 { 00:15:58.043 "name": "BaseBdev2", 00:15:58.043 "uuid": "023350ea-8535-44c2-a416-a6dfcd8a07a8", 00:15:58.043 "is_configured": true, 00:15:58.043 "data_offset": 0, 00:15:58.043 "data_size": 65536 00:15:58.043 } 00:15:58.043 ] 00:15:58.043 }' 00:15:58.043 00:35:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.043 00:35:31 -- common/autotest_common.sh@10 -- # set +x 00:15:58.978 00:35:32 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:58.978 00:35:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:58.978 00:35:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.978 00:35:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:58.978 00:35:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:58.978 00:35:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:58.978 00:35:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:59.237 [2024-04-27 00:35:32.812077] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.237 [2024-04-27 00:35:32.812365] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:15:59.495 00:35:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:59.495 00:35:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:59.495 00:35:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.495 00:35:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:59.753 00:35:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:59.753 00:35:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:59.753 00:35:33 -- bdev/bdev_raid.sh@287 -- # killprocess 120424 00:15:59.753 00:35:33 -- common/autotest_common.sh@936 -- # '[' -z 120424 ']' 00:15:59.753 00:35:33 -- common/autotest_common.sh@940 -- # kill -0 120424 00:15:59.753 00:35:33 -- common/autotest_common.sh@941 -- # uname 00:15:59.753 00:35:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:59.753 00:35:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120424 00:15:59.753 00:35:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:59.753 00:35:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:59.753 00:35:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120424' 00:15:59.753 killing process with pid 120424 00:15:59.753 00:35:33 -- common/autotest_common.sh@955 -- # kill 120424 00:15:59.753 [2024-04-27 00:35:33.190087] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.753 00:35:33 -- common/autotest_common.sh@960 -- # wait 120424 00:15:59.753 [2024-04-27 00:35:33.190410] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.687 ************************************ 00:16:00.687 END TEST raid_state_function_test 00:16:00.687 ************************************ 00:16:00.687 00:35:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:00.687 00:16:00.687 real 0m10.579s 00:16:00.687 user 0m18.407s 00:16:00.687 sys 0m1.313s 00:16:00.687 00:35:34 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:16:00.687 00:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:00.687 00:35:34 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:16:00.687 00:35:34 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:00.687 00:35:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:00.687 00:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:00.946 ************************************ 00:16:00.946 START TEST raid_state_function_test_sb 00:16:00.946 ************************************ 00:16:00.946 00:35:34 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 2 true 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=120756 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120756' 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:00.946 Process raid pid: 120756 00:16:00.946 00:35:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120756 /var/tmp/spdk-raid.sock 00:16:00.946 00:35:34 -- common/autotest_common.sh@817 -- # '[' -z 120756 ']' 00:16:00.946 00:35:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:00.946 00:35:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:00.946 00:35:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:00.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
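The "Waiting for process to start up and listen..." message above is printed by the waitforlisten helper in autotest_common.sh while it polls the freshly started bdev_svc app. As a rough sketch of that polling pattern — not the literal autotest_common.sh implementation, and assuming the standard rpc_get_methods RPC rather than whatever probe the helper actually uses — the wait can be expressed as:

# Illustrative sketch only: poll the app's private UNIX-domain socket
# until it accepts JSON-RPC calls, or give up after N retries.
wait_for_rpc_socket() {
    local sock=$1 retries=${2:-100}
    while ((retries-- > 0)); do
        # rpc_get_methods succeeds only once the target app is listening
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1  # timed out waiting for $sock
}
wait_for_rpc_socket /var/tmp/spdk-raid.sock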
00:16:00.946 00:35:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:00.946 00:35:34 -- common/autotest_common.sh@10 -- # set +x 00:16:00.946 [2024-04-27 00:35:34.380590] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:16:00.946 [2024-04-27 00:35:34.381113] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.205 [2024-04-27 00:35:34.542948] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.205 [2024-04-27 00:35:34.724158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.464 [2024-04-27 00:35:34.916479] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.030 00:35:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:02.030 00:35:35 -- common/autotest_common.sh@850 -- # return 0 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:02.030 [2024-04-27 00:35:35.583125] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.030 [2024-04-27 00:35:35.583422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.030 [2024-04-27 00:35:35.583540] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.030 [2024-04-27 00:35:35.583603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.030 00:35:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.288 00:35:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.288 "name": "Existed_Raid", 00:16:02.288 "uuid": "dd34c1b8-0c65-496c-a5e3-9ac2961f9f3f", 00:16:02.288 "strip_size_kb": 64, 00:16:02.288 "state": "configuring", 00:16:02.288 "raid_level": "concat", 00:16:02.288 "superblock": true, 00:16:02.288 "num_base_bdevs": 2, 00:16:02.288 "num_base_bdevs_discovered": 0, 00:16:02.288 "num_base_bdevs_operational": 2, 00:16:02.288 "base_bdevs_list": [ 00:16:02.288 { 00:16:02.288 "name": "BaseBdev1", 00:16:02.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.288 "is_configured": false, 00:16:02.288 "data_offset": 0, 00:16:02.288 "data_size": 0 00:16:02.288 }, 00:16:02.288 { 00:16:02.288 "name": "BaseBdev2", 00:16:02.288 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:02.288 "is_configured": false, 00:16:02.288 "data_offset": 0, 00:16:02.288 "data_size": 0 00:16:02.288 } 00:16:02.288 ] 00:16:02.288 }' 00:16:02.288 00:35:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.288 00:35:35 -- common/autotest_common.sh@10 -- # set +x 00:16:03.225 00:35:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:03.225 [2024-04-27 00:35:36.811215] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.225 [2024-04-27 00:35:36.811470] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:16:03.485 00:35:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:03.743 [2024-04-27 00:35:37.087285] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.743 [2024-04-27 00:35:37.087527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.743 [2024-04-27 00:35:37.087639] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.743 [2024-04-27 00:35:37.087710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.743 00:35:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.032 [2024-04-27 00:35:37.368030] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.032 BaseBdev1 00:16:04.032 00:35:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:04.032 00:35:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:04.032 00:35:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:04.032 00:35:37 -- common/autotest_common.sh@887 -- # local i 00:16:04.032 00:35:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:04.032 00:35:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:04.032 00:35:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.032 00:35:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:04.300 [ 00:16:04.300 { 00:16:04.301 "name": "BaseBdev1", 00:16:04.301 "aliases": [ 00:16:04.301 "c895bde2-0b06-49a2-8046-4c9a9bd6fe1a" 00:16:04.301 ], 00:16:04.301 "product_name": "Malloc disk", 00:16:04.301 "block_size": 512, 00:16:04.301 "num_blocks": 65536, 00:16:04.301 "uuid": "c895bde2-0b06-49a2-8046-4c9a9bd6fe1a", 00:16:04.301 "assigned_rate_limits": { 00:16:04.301 "rw_ios_per_sec": 0, 00:16:04.301 "rw_mbytes_per_sec": 0, 00:16:04.301 "r_mbytes_per_sec": 0, 00:16:04.301 "w_mbytes_per_sec": 0 00:16:04.301 }, 00:16:04.301 "claimed": true, 00:16:04.301 "claim_type": "exclusive_write", 00:16:04.301 "zoned": false, 00:16:04.301 "supported_io_types": { 00:16:04.301 "read": true, 00:16:04.301 "write": true, 00:16:04.301 "unmap": true, 00:16:04.301 "write_zeroes": true, 00:16:04.301 "flush": true, 00:16:04.301 "reset": true, 00:16:04.301 "compare": false, 00:16:04.301 "compare_and_write": false, 00:16:04.301 "abort": true, 00:16:04.301 "nvme_admin": false, 00:16:04.301 "nvme_io": 
false 00:16:04.301 }, 00:16:04.301 "memory_domains": [ 00:16:04.301 { 00:16:04.301 "dma_device_id": "system", 00:16:04.301 "dma_device_type": 1 00:16:04.301 }, 00:16:04.301 { 00:16:04.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.301 "dma_device_type": 2 00:16:04.301 } 00:16:04.301 ], 00:16:04.301 "driver_specific": {} 00:16:04.301 } 00:16:04.301 ] 00:16:04.301 00:35:37 -- common/autotest_common.sh@893 -- # return 0 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.301 00:35:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.559 00:35:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.559 "name": "Existed_Raid", 00:16:04.559 "uuid": "5c430c4a-2079-483d-9a04-e03be9bd415e", 00:16:04.559 "strip_size_kb": 64, 00:16:04.559 "state": "configuring", 00:16:04.559 "raid_level": "concat", 00:16:04.559 "superblock": true, 00:16:04.559 "num_base_bdevs": 2, 00:16:04.559 "num_base_bdevs_discovered": 1, 00:16:04.559 "num_base_bdevs_operational": 2, 00:16:04.559 "base_bdevs_list": [ 00:16:04.559 { 00:16:04.559 "name": "BaseBdev1", 00:16:04.559 "uuid": "c895bde2-0b06-49a2-8046-4c9a9bd6fe1a", 00:16:04.559 "is_configured": true, 00:16:04.559 "data_offset": 2048, 00:16:04.559 "data_size": 63488 00:16:04.559 }, 00:16:04.559 { 00:16:04.559 "name": "BaseBdev2", 00:16:04.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.560 "is_configured": false, 00:16:04.560 "data_offset": 0, 00:16:04.560 "data_size": 0 00:16:04.560 } 00:16:04.560 ] 00:16:04.560 }' 00:16:04.560 00:35:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.560 00:35:38 -- common/autotest_common.sh@10 -- # set +x 00:16:05.126 00:35:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:05.384 [2024-04-27 00:35:38.876438] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.384 [2024-04-27 00:35:38.876696] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:16:05.384 00:35:38 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:05.384 00:35:38 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:05.641 00:35:39 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:05.899 BaseBdev1 00:16:05.899 00:35:39 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:05.899 00:35:39 -- common/autotest_common.sh@885 -- # local 
bdev_name=BaseBdev1 00:16:05.899 00:35:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:05.899 00:35:39 -- common/autotest_common.sh@887 -- # local i 00:16:05.899 00:35:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:05.899 00:35:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:05.899 00:35:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.156 00:35:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.414 [ 00:16:06.414 { 00:16:06.414 "name": "BaseBdev1", 00:16:06.414 "aliases": [ 00:16:06.414 "cc9ebb22-e96e-4c43-bb54-62e76e61bf62" 00:16:06.414 ], 00:16:06.414 "product_name": "Malloc disk", 00:16:06.414 "block_size": 512, 00:16:06.414 "num_blocks": 65536, 00:16:06.414 "uuid": "cc9ebb22-e96e-4c43-bb54-62e76e61bf62", 00:16:06.414 "assigned_rate_limits": { 00:16:06.414 "rw_ios_per_sec": 0, 00:16:06.414 "rw_mbytes_per_sec": 0, 00:16:06.414 "r_mbytes_per_sec": 0, 00:16:06.414 "w_mbytes_per_sec": 0 00:16:06.414 }, 00:16:06.414 "claimed": false, 00:16:06.414 "zoned": false, 00:16:06.414 "supported_io_types": { 00:16:06.414 "read": true, 00:16:06.414 "write": true, 00:16:06.414 "unmap": true, 00:16:06.414 "write_zeroes": true, 00:16:06.414 "flush": true, 00:16:06.414 "reset": true, 00:16:06.414 "compare": false, 00:16:06.414 "compare_and_write": false, 00:16:06.414 "abort": true, 00:16:06.414 "nvme_admin": false, 00:16:06.414 "nvme_io": false 00:16:06.414 }, 00:16:06.414 "memory_domains": [ 00:16:06.414 { 00:16:06.414 "dma_device_id": "system", 00:16:06.414 "dma_device_type": 1 00:16:06.414 }, 00:16:06.414 { 00:16:06.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.414 "dma_device_type": 2 00:16:06.414 } 00:16:06.414 ], 00:16:06.414 "driver_specific": {} 00:16:06.414 } 00:16:06.414 ] 00:16:06.414 00:35:39 -- common/autotest_common.sh@893 -- # return 0 00:16:06.414 00:35:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:06.672 [2024-04-27 00:35:40.154808] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.672 [2024-04-27 00:35:40.157144] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.672 [2024-04-27 00:35:40.157357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.672 00:35:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.930 00:35:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.930 "name": "Existed_Raid", 00:16:06.930 "uuid": "01104a57-9afb-49b3-9cd8-7289542f2bbf", 00:16:06.930 "strip_size_kb": 64, 00:16:06.930 "state": "configuring", 00:16:06.930 "raid_level": "concat", 00:16:06.930 "superblock": true, 00:16:06.930 "num_base_bdevs": 2, 00:16:06.930 "num_base_bdevs_discovered": 1, 00:16:06.930 "num_base_bdevs_operational": 2, 00:16:06.930 "base_bdevs_list": [ 00:16:06.930 { 00:16:06.930 "name": "BaseBdev1", 00:16:06.930 "uuid": "cc9ebb22-e96e-4c43-bb54-62e76e61bf62", 00:16:06.930 "is_configured": true, 00:16:06.930 "data_offset": 2048, 00:16:06.930 "data_size": 63488 00:16:06.930 }, 00:16:06.930 { 00:16:06.930 "name": "BaseBdev2", 00:16:06.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.930 "is_configured": false, 00:16:06.930 "data_offset": 0, 00:16:06.930 "data_size": 0 00:16:06.930 } 00:16:06.930 ] 00:16:06.930 }' 00:16:06.930 00:35:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.930 00:35:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.496 00:35:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:07.754 [2024-04-27 00:35:41.295014] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.754 [2024-04-27 00:35:41.295519] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:16:07.754 [2024-04-27 00:35:41.295650] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:07.754 [2024-04-27 00:35:41.295820] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:07.754 [2024-04-27 00:35:41.296255] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:16:07.754 [2024-04-27 00:35:41.296393] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:16:07.754 [2024-04-27 00:35:41.296699] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.754 BaseBdev2 00:16:07.754 00:35:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:07.754 00:35:41 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:07.754 00:35:41 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:07.754 00:35:41 -- common/autotest_common.sh@887 -- # local i 00:16:07.754 00:35:41 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:07.754 00:35:41 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:07.754 00:35:41 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.012 00:35:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.271 [ 00:16:08.271 { 00:16:08.271 "name": "BaseBdev2", 00:16:08.271 "aliases": [ 00:16:08.271 "ae75ef32-442c-4ed6-bf87-0db913f00f7a" 00:16:08.271 ], 00:16:08.271 "product_name": "Malloc disk", 00:16:08.271 "block_size": 512, 00:16:08.271 "num_blocks": 65536, 00:16:08.271 "uuid": "ae75ef32-442c-4ed6-bf87-0db913f00f7a", 00:16:08.271 
"assigned_rate_limits": { 00:16:08.271 "rw_ios_per_sec": 0, 00:16:08.271 "rw_mbytes_per_sec": 0, 00:16:08.271 "r_mbytes_per_sec": 0, 00:16:08.271 "w_mbytes_per_sec": 0 00:16:08.271 }, 00:16:08.271 "claimed": true, 00:16:08.271 "claim_type": "exclusive_write", 00:16:08.271 "zoned": false, 00:16:08.271 "supported_io_types": { 00:16:08.271 "read": true, 00:16:08.271 "write": true, 00:16:08.271 "unmap": true, 00:16:08.271 "write_zeroes": true, 00:16:08.271 "flush": true, 00:16:08.271 "reset": true, 00:16:08.271 "compare": false, 00:16:08.271 "compare_and_write": false, 00:16:08.271 "abort": true, 00:16:08.271 "nvme_admin": false, 00:16:08.271 "nvme_io": false 00:16:08.271 }, 00:16:08.271 "memory_domains": [ 00:16:08.271 { 00:16:08.271 "dma_device_id": "system", 00:16:08.271 "dma_device_type": 1 00:16:08.271 }, 00:16:08.271 { 00:16:08.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.271 "dma_device_type": 2 00:16:08.271 } 00:16:08.271 ], 00:16:08.271 "driver_specific": {} 00:16:08.271 } 00:16:08.271 ] 00:16:08.271 00:35:41 -- common/autotest_common.sh@893 -- # return 0 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.271 00:35:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.839 00:35:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.839 "name": "Existed_Raid", 00:16:08.839 "uuid": "01104a57-9afb-49b3-9cd8-7289542f2bbf", 00:16:08.839 "strip_size_kb": 64, 00:16:08.839 "state": "online", 00:16:08.839 "raid_level": "concat", 00:16:08.839 "superblock": true, 00:16:08.839 "num_base_bdevs": 2, 00:16:08.839 "num_base_bdevs_discovered": 2, 00:16:08.839 "num_base_bdevs_operational": 2, 00:16:08.839 "base_bdevs_list": [ 00:16:08.839 { 00:16:08.839 "name": "BaseBdev1", 00:16:08.839 "uuid": "cc9ebb22-e96e-4c43-bb54-62e76e61bf62", 00:16:08.839 "is_configured": true, 00:16:08.839 "data_offset": 2048, 00:16:08.839 "data_size": 63488 00:16:08.839 }, 00:16:08.839 { 00:16:08.839 "name": "BaseBdev2", 00:16:08.839 "uuid": "ae75ef32-442c-4ed6-bf87-0db913f00f7a", 00:16:08.839 "is_configured": true, 00:16:08.839 "data_offset": 2048, 00:16:08.839 "data_size": 63488 00:16:08.839 } 00:16:08.839 ] 00:16:08.839 }' 00:16:08.839 00:35:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.839 00:35:42 -- common/autotest_common.sh@10 -- # set +x 00:16:09.406 00:35:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:09.664 [2024-04-27 00:35:43.075563] 
bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.664 [2024-04-27 00:35:43.075772] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.664 [2024-04-27 00:35:43.075927] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.664 00:35:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.924 00:35:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.924 "name": "Existed_Raid", 00:16:09.924 "uuid": "01104a57-9afb-49b3-9cd8-7289542f2bbf", 00:16:09.924 "strip_size_kb": 64, 00:16:09.924 "state": "offline", 00:16:09.924 "raid_level": "concat", 00:16:09.924 "superblock": true, 00:16:09.924 "num_base_bdevs": 2, 00:16:09.924 "num_base_bdevs_discovered": 1, 00:16:09.924 "num_base_bdevs_operational": 1, 00:16:09.924 "base_bdevs_list": [ 00:16:09.924 { 00:16:09.924 "name": null, 00:16:09.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.924 "is_configured": false, 00:16:09.924 "data_offset": 2048, 00:16:09.924 "data_size": 63488 00:16:09.924 }, 00:16:09.924 { 00:16:09.924 "name": "BaseBdev2", 00:16:09.924 "uuid": "ae75ef32-442c-4ed6-bf87-0db913f00f7a", 00:16:09.924 "is_configured": true, 00:16:09.924 "data_offset": 2048, 00:16:09.924 "data_size": 63488 00:16:09.924 } 00:16:09.924 ] 00:16:09.924 }' 00:16:09.924 00:35:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.924 00:35:43 -- common/autotest_common.sh@10 -- # set +x 00:16:10.510 00:35:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:10.510 00:35:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:10.510 00:35:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.510 00:35:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:10.769 00:35:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:10.769 00:35:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.769 00:35:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:11.027 [2024-04-27 00:35:44.538092] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:16:11.027 [2024-04-27 00:35:44.538520] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:16:11.285 00:35:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:11.285 00:35:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:11.285 00:35:44 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.285 00:35:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:11.285 00:35:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:11.285 00:35:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:11.285 00:35:44 -- bdev/bdev_raid.sh@287 -- # killprocess 120756 00:16:11.285 00:35:44 -- common/autotest_common.sh@936 -- # '[' -z 120756 ']' 00:16:11.285 00:35:44 -- common/autotest_common.sh@940 -- # kill -0 120756 00:16:11.285 00:35:44 -- common/autotest_common.sh@941 -- # uname 00:16:11.285 00:35:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.285 00:35:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 120756 00:16:11.285 00:35:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:11.285 00:35:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:11.285 00:35:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 120756' 00:16:11.285 killing process with pid 120756 00:16:11.285 00:35:44 -- common/autotest_common.sh@955 -- # kill 120756 00:16:11.285 [2024-04-27 00:35:44.860055] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.285 00:35:44 -- common/autotest_common.sh@960 -- # wait 120756 00:16:11.285 [2024-04-27 00:35:44.860528] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:12.663 00:16:12.663 real 0m11.554s 00:16:12.663 user 0m20.068s 00:16:12.663 sys 0m1.471s 00:16:12.663 00:35:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:12.663 00:35:45 -- common/autotest_common.sh@10 -- # set +x 00:16:12.663 ************************************ 00:16:12.663 END TEST raid_state_function_test_sb 00:16:12.663 ************************************ 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:16:12.663 00:35:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:12.663 00:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.663 00:35:45 -- common/autotest_common.sh@10 -- # set +x 00:16:12.663 ************************************ 00:16:12.663 START TEST raid_superblock_test 00:16:12.663 ************************************ 00:16:12.663 00:35:45 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 2 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@344 -- # local 
strip_size 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@357 -- # raid_pid=121096 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:12.663 00:35:45 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121096 /var/tmp/spdk-raid.sock 00:16:12.663 00:35:45 -- common/autotest_common.sh@817 -- # '[' -z 121096 ']' 00:16:12.663 00:35:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:12.663 00:35:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:12.663 00:35:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:12.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:12.663 00:35:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:12.663 00:35:45 -- common/autotest_common.sh@10 -- # set +x 00:16:12.663 [2024-04-27 00:35:46.025088] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:16:12.663 [2024-04-27 00:35:46.025506] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121096 ] 00:16:12.663 [2024-04-27 00:35:46.197022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.923 [2024-04-27 00:35:46.443695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.181 [2024-04-27 00:35:46.631918] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.439 00:35:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.439 00:35:46 -- common/autotest_common.sh@850 -- # return 0 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.439 00:35:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:13.697 malloc1 00:16:13.697 00:35:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:13.955 [2024-04-27 00:35:47.475526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:13.955 [2024-04-27 00:35:47.475771] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:13.955 [2024-04-27 00:35:47.475845] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:13.955 [2024-04-27 00:35:47.476133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.955 [2024-04-27 00:35:47.478606] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.955 [2024-04-27 00:35:47.478815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:13.955 pt1 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:13.955 00:35:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:14.213 malloc2 00:16:14.213 00:35:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:14.471 [2024-04-27 00:35:47.979801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:14.471 [2024-04-27 00:35:47.980040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.471 [2024-04-27 00:35:47.980125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:14.471 [2024-04-27 00:35:47.980446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.471 [2024-04-27 00:35:47.983008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.471 [2024-04-27 00:35:47.983198] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:14.471 pt2 00:16:14.471 00:35:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:14.471 00:35:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:14.471 00:35:47 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:16:14.729 [2024-04-27 00:35:48.184023] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:14.729 [2024-04-27 00:35:48.186285] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:14.729 [2024-04-27 00:35:48.186711] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:16:14.729 [2024-04-27 00:35:48.186920] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:14.729 [2024-04-27 00:35:48.187089] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:14.729 [2024-04-27 00:35:48.187515] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:16:14.729 [2024-04-27 00:35:48.187631] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:16:14.729 
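The debug records above show raid_superblock_test assembling raid_bdev1 from two passthru-wrapped malloc bdevs into a concat array with a 64 KiB strip and an on-disk superblock (-s). A minimal sketch of the same RPC sequence, with every command and flag copied from the invocations traced above (the rpc/sock shell variables are shorthand added here, not part of the harness):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # back each leg with a 32 MiB malloc bdev of 512 B blocks (65536 blocks, as dumped above)
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc2
    # wrap them in passthru bdevs with the fixed UUIDs the test uses
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # concat raid, 64 KiB strip (-z 64), with superblock (-s)
    $rpc -s $sock bdev_raid_create -z 64 -s -r concat -b 'pt1 pt2' -n raid_bdev1
    # confirm the array came up, using the same jq filter as the test
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

Note the superblock's effect visible in the JSON dumps that follow: data_offset is 2048 and data_size 63488, because the first blocks of each base bdev are reserved for raid metadata.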
[2024-04-27 00:35:48.187916] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.729 00:35:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.986 00:35:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.986 "name": "raid_bdev1", 00:16:14.986 "uuid": "f4fa0444-968d-49b8-9a89-67aef0e557f6", 00:16:14.986 "strip_size_kb": 64, 00:16:14.986 "state": "online", 00:16:14.986 "raid_level": "concat", 00:16:14.986 "superblock": true, 00:16:14.986 "num_base_bdevs": 2, 00:16:14.986 "num_base_bdevs_discovered": 2, 00:16:14.986 "num_base_bdevs_operational": 2, 00:16:14.986 "base_bdevs_list": [ 00:16:14.986 { 00:16:14.986 "name": "pt1", 00:16:14.986 "uuid": "6199d1f7-6ce9-527a-9ddd-828a51099350", 00:16:14.986 "is_configured": true, 00:16:14.986 "data_offset": 2048, 00:16:14.986 "data_size": 63488 00:16:14.986 }, 00:16:14.986 { 00:16:14.986 "name": "pt2", 00:16:14.986 "uuid": "6817bd93-3585-5368-8632-19e2daa5080a", 00:16:14.986 "is_configured": true, 00:16:14.986 "data_offset": 2048, 00:16:14.986 "data_size": 63488 00:16:14.986 } 00:16:14.986 ] 00:16:14.986 }' 00:16:14.986 00:35:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.986 00:35:48 -- common/autotest_common.sh@10 -- # set +x 00:16:15.551 00:35:49 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:15.551 00:35:49 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:15.809 [2024-04-27 00:35:49.320511] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:15.809 00:35:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f4fa0444-968d-49b8-9a89-67aef0e557f6 00:16:15.809 00:35:49 -- bdev/bdev_raid.sh@380 -- # '[' -z f4fa0444-968d-49b8-9a89-67aef0e557f6 ']' 00:16:15.809 00:35:49 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:16.068 [2024-04-27 00:35:49.588313] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.068 [2024-04-27 00:35:49.588521] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.068 [2024-04-27 00:35:49.588698] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.068 [2024-04-27 00:35:49.588916] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.068 [2024-04-27 00:35:49.589024] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name 
raid_bdev1, state offline 00:16:16.068 00:35:49 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.068 00:35:49 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:16.326 00:35:49 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:16.326 00:35:49 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:16.326 00:35:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.326 00:35:49 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:16.584 00:35:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.584 00:35:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:16.844 00:35:50 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:16.844 00:35:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:17.103 00:35:50 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:17.103 00:35:50 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:17.103 00:35:50 -- common/autotest_common.sh@638 -- # local es=0 00:16:17.103 00:35:50 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:17.103 00:35:50 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:17.103 00:35:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:17.103 00:35:50 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:17.103 00:35:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:17.103 00:35:50 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:17.103 00:35:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:17.103 00:35:50 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:17.103 00:35:50 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:17.103 00:35:50 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:17.361 [2024-04-27 00:35:50.768573] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:17.361 [2024-04-27 00:35:50.770787] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:17.361 [2024-04-27 00:35:50.771007] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:17.361 [2024-04-27 00:35:50.771202] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:17.361 [2024-04-27 00:35:50.771367] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.361 [2024-04-27 00:35:50.771414] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:16:17.361 request: 00:16:17.361 { 00:16:17.361 "name": 
"raid_bdev1", 00:16:17.361 "raid_level": "concat", 00:16:17.361 "base_bdevs": [ 00:16:17.361 "malloc1", 00:16:17.361 "malloc2" 00:16:17.361 ], 00:16:17.361 "superblock": false, 00:16:17.361 "strip_size_kb": 64, 00:16:17.361 "method": "bdev_raid_create", 00:16:17.361 "req_id": 1 00:16:17.361 } 00:16:17.361 Got JSON-RPC error response 00:16:17.361 response: 00:16:17.361 { 00:16:17.361 "code": -17, 00:16:17.361 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:17.361 } 00:16:17.361 00:35:50 -- common/autotest_common.sh@641 -- # es=1 00:16:17.361 00:35:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:17.361 00:35:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:17.361 00:35:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:17.361 00:35:50 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.361 00:35:50 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:17.620 00:35:50 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:17.620 00:35:50 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:17.620 00:35:50 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:17.620 [2024-04-27 00:35:51.184644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:17.620 [2024-04-27 00:35:51.185009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.620 [2024-04-27 00:35:51.185165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:17.620 [2024-04-27 00:35:51.185324] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.620 [2024-04-27 00:35:51.188109] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.620 [2024-04-27 00:35:51.188342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:17.620 [2024-04-27 00:35:51.188561] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:17.620 [2024-04-27 00:35:51.188757] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.620 pt1 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.620 00:35:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.186 00:35:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.186 "name": "raid_bdev1", 00:16:18.186 "uuid": "f4fa0444-968d-49b8-9a89-67aef0e557f6", 00:16:18.186 
"strip_size_kb": 64, 00:16:18.186 "state": "configuring", 00:16:18.186 "raid_level": "concat", 00:16:18.186 "superblock": true, 00:16:18.186 "num_base_bdevs": 2, 00:16:18.186 "num_base_bdevs_discovered": 1, 00:16:18.186 "num_base_bdevs_operational": 2, 00:16:18.186 "base_bdevs_list": [ 00:16:18.186 { 00:16:18.187 "name": "pt1", 00:16:18.187 "uuid": "6199d1f7-6ce9-527a-9ddd-828a51099350", 00:16:18.187 "is_configured": true, 00:16:18.187 "data_offset": 2048, 00:16:18.187 "data_size": 63488 00:16:18.187 }, 00:16:18.187 { 00:16:18.187 "name": null, 00:16:18.187 "uuid": "6817bd93-3585-5368-8632-19e2daa5080a", 00:16:18.187 "is_configured": false, 00:16:18.187 "data_offset": 2048, 00:16:18.187 "data_size": 63488 00:16:18.187 } 00:16:18.187 ] 00:16:18.187 }' 00:16:18.187 00:35:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.187 00:35:51 -- common/autotest_common.sh@10 -- # set +x 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.754 [2024-04-27 00:35:52.308946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.754 [2024-04-27 00:35:52.309196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.754 [2024-04-27 00:35:52.309384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:18.754 [2024-04-27 00:35:52.309510] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.754 [2024-04-27 00:35:52.310043] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.754 [2024-04-27 00:35:52.310207] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.754 [2024-04-27 00:35:52.310495] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:18.754 [2024-04-27 00:35:52.310644] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:18.754 [2024-04-27 00:35:52.310901] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:16:18.754 [2024-04-27 00:35:52.311015] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:18.754 [2024-04-27 00:35:52.311163] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:18.754 [2024-04-27 00:35:52.311516] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:16:18.754 [2024-04-27 00:35:52.311631] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:16:18.754 [2024-04-27 00:35:52.311855] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.754 pt2 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=concat 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.754 00:35:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.012 00:35:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.012 "name": "raid_bdev1", 00:16:19.012 "uuid": "f4fa0444-968d-49b8-9a89-67aef0e557f6", 00:16:19.012 "strip_size_kb": 64, 00:16:19.012 "state": "online", 00:16:19.012 "raid_level": "concat", 00:16:19.012 "superblock": true, 00:16:19.012 "num_base_bdevs": 2, 00:16:19.012 "num_base_bdevs_discovered": 2, 00:16:19.012 "num_base_bdevs_operational": 2, 00:16:19.012 "base_bdevs_list": [ 00:16:19.012 { 00:16:19.012 "name": "pt1", 00:16:19.012 "uuid": "6199d1f7-6ce9-527a-9ddd-828a51099350", 00:16:19.012 "is_configured": true, 00:16:19.012 "data_offset": 2048, 00:16:19.012 "data_size": 63488 00:16:19.012 }, 00:16:19.012 { 00:16:19.012 "name": "pt2", 00:16:19.012 "uuid": "6817bd93-3585-5368-8632-19e2daa5080a", 00:16:19.012 "is_configured": true, 00:16:19.012 "data_offset": 2048, 00:16:19.012 "data_size": 63488 00:16:19.012 } 00:16:19.012 ] 00:16:19.012 }' 00:16:19.012 00:35:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.012 00:35:52 -- common/autotest_common.sh@10 -- # set +x 00:16:19.578 00:35:53 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:19.578 00:35:53 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:19.836 [2024-04-27 00:35:53.321415] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.836 00:35:53 -- bdev/bdev_raid.sh@430 -- # '[' f4fa0444-968d-49b8-9a89-67aef0e557f6 '!=' f4fa0444-968d-49b8-9a89-67aef0e557f6 ']' 00:16:19.836 00:35:53 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:19.836 00:35:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:19.836 00:35:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:19.836 00:35:53 -- bdev/bdev_raid.sh@511 -- # killprocess 121096 00:16:19.836 00:35:53 -- common/autotest_common.sh@936 -- # '[' -z 121096 ']' 00:16:19.836 00:35:53 -- common/autotest_common.sh@940 -- # kill -0 121096 00:16:19.836 00:35:53 -- common/autotest_common.sh@941 -- # uname 00:16:19.836 00:35:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.836 00:35:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121096 00:16:19.836 killing process with pid 121096 00:16:19.836 00:35:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:19.836 00:35:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:19.836 00:35:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121096' 00:16:19.836 00:35:53 -- common/autotest_common.sh@955 -- # kill 121096 00:16:19.836 00:35:53 -- common/autotest_common.sh@960 -- # wait 121096 00:16:19.836 [2024-04-27 00:35:53.364155] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.836 [2024-04-27 00:35:53.364227] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.836 [2024-04-27 00:35:53.364280] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.836 [2024-04-27 00:35:53.364306] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:16:20.095 [2024-04-27 00:35:53.504769] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:21.029 00:16:21.029 real 0m8.528s 00:16:21.029 user 0m14.569s 00:16:21.029 sys 0m1.031s 00:16:21.029 00:35:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.029 ************************************ 00:16:21.029 END TEST raid_superblock_test 00:16:21.029 ************************************ 00:16:21.029 00:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:21.029 00:35:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:21.029 00:35:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.029 00:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:21.029 ************************************ 00:16:21.029 START TEST raid_state_function_test 00:16:21.029 ************************************ 00:16:21.029 00:35:54 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 2 false 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=121358 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:21.029 Process raid pid: 121358 00:16:21.029 00:35:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121358' 00:16:21.030 
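Before any RPCs can land, each test stanza starts a fresh bdev_svc app on a private socket and blocks until it is listening; that is what the waitforlisten call traced next does. A condensed sketch of the startup, using the exact binary path and flags recorded above (waitforlisten is the helper from common/autotest_common.sh whose locals are traced below; error handling and the harness's trap-based cleanup are omitted here):

    # start the minimal bdev application with raid debug logging enabled
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # poll (up to max_retries=100 per the trace) until the UNIX-domain RPC socket accepts connections
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock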
00:35:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121358 /var/tmp/spdk-raid.sock 00:16:21.030 00:35:54 -- common/autotest_common.sh@817 -- # '[' -z 121358 ']' 00:16:21.030 00:35:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:21.030 00:35:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:21.030 00:35:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:21.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:21.030 00:35:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:21.030 00:35:54 -- common/autotest_common.sh@10 -- # set +x 00:16:21.287 [2024-04-27 00:35:54.635056] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:16:21.287 [2024-04-27 00:35:54.635420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.287 [2024-04-27 00:35:54.793150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.545 [2024-04-27 00:35:54.978813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.802 [2024-04-27 00:35:55.165650] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.060 00:35:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:22.060 00:35:55 -- common/autotest_common.sh@850 -- # return 0 00:16:22.060 00:35:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:22.319 [2024-04-27 00:35:55.798523] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.319 [2024-04-27 00:35:55.798781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.319 [2024-04-27 00:35:55.798965] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.319 [2024-04-27 00:35:55.799030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.319 00:35:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.577 00:35:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.577 "name": "Existed_Raid", 00:16:22.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.577 
"strip_size_kb": 0, 00:16:22.577 "state": "configuring", 00:16:22.577 "raid_level": "raid1", 00:16:22.577 "superblock": false, 00:16:22.577 "num_base_bdevs": 2, 00:16:22.577 "num_base_bdevs_discovered": 0, 00:16:22.577 "num_base_bdevs_operational": 2, 00:16:22.577 "base_bdevs_list": [ 00:16:22.577 { 00:16:22.577 "name": "BaseBdev1", 00:16:22.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.577 "is_configured": false, 00:16:22.577 "data_offset": 0, 00:16:22.577 "data_size": 0 00:16:22.577 }, 00:16:22.577 { 00:16:22.577 "name": "BaseBdev2", 00:16:22.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.577 "is_configured": false, 00:16:22.577 "data_offset": 0, 00:16:22.577 "data_size": 0 00:16:22.577 } 00:16:22.577 ] 00:16:22.577 }' 00:16:22.577 00:35:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.577 00:35:56 -- common/autotest_common.sh@10 -- # set +x 00:16:23.145 00:35:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:23.404 [2024-04-27 00:35:56.890700] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.404 [2024-04-27 00:35:56.890935] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:16:23.404 00:35:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:23.662 [2024-04-27 00:35:57.150724] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.662 [2024-04-27 00:35:57.151021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.662 [2024-04-27 00:35:57.151165] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.662 [2024-04-27 00:35:57.151240] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.662 00:35:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.921 [2024-04-27 00:35:57.377472] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.921 BaseBdev1 00:16:23.921 00:35:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:23.921 00:35:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:23.921 00:35:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:23.921 00:35:57 -- common/autotest_common.sh@887 -- # local i 00:16:23.921 00:35:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:23.921 00:35:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:23.921 00:35:57 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:24.181 00:35:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.440 [ 00:16:24.440 { 00:16:24.440 "name": "BaseBdev1", 00:16:24.440 "aliases": [ 00:16:24.440 "6f831b99-c577-4081-8aff-79bcd0ee2a18" 00:16:24.440 ], 00:16:24.440 "product_name": "Malloc disk", 00:16:24.440 "block_size": 512, 00:16:24.440 "num_blocks": 65536, 00:16:24.440 "uuid": "6f831b99-c577-4081-8aff-79bcd0ee2a18", 00:16:24.440 "assigned_rate_limits": { 00:16:24.440 "rw_ios_per_sec": 0, 00:16:24.440 
"rw_mbytes_per_sec": 0, 00:16:24.440 "r_mbytes_per_sec": 0, 00:16:24.440 "w_mbytes_per_sec": 0 00:16:24.440 }, 00:16:24.440 "claimed": true, 00:16:24.440 "claim_type": "exclusive_write", 00:16:24.440 "zoned": false, 00:16:24.440 "supported_io_types": { 00:16:24.440 "read": true, 00:16:24.440 "write": true, 00:16:24.440 "unmap": true, 00:16:24.440 "write_zeroes": true, 00:16:24.440 "flush": true, 00:16:24.440 "reset": true, 00:16:24.440 "compare": false, 00:16:24.440 "compare_and_write": false, 00:16:24.440 "abort": true, 00:16:24.440 "nvme_admin": false, 00:16:24.440 "nvme_io": false 00:16:24.440 }, 00:16:24.440 "memory_domains": [ 00:16:24.440 { 00:16:24.440 "dma_device_id": "system", 00:16:24.440 "dma_device_type": 1 00:16:24.440 }, 00:16:24.440 { 00:16:24.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.440 "dma_device_type": 2 00:16:24.440 } 00:16:24.440 ], 00:16:24.440 "driver_specific": {} 00:16:24.440 } 00:16:24.440 ] 00:16:24.440 00:35:57 -- common/autotest_common.sh@893 -- # return 0 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.440 00:35:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.699 00:35:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.699 "name": "Existed_Raid", 00:16:24.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.699 "strip_size_kb": 0, 00:16:24.699 "state": "configuring", 00:16:24.699 "raid_level": "raid1", 00:16:24.699 "superblock": false, 00:16:24.699 "num_base_bdevs": 2, 00:16:24.699 "num_base_bdevs_discovered": 1, 00:16:24.699 "num_base_bdevs_operational": 2, 00:16:24.699 "base_bdevs_list": [ 00:16:24.699 { 00:16:24.699 "name": "BaseBdev1", 00:16:24.699 "uuid": "6f831b99-c577-4081-8aff-79bcd0ee2a18", 00:16:24.699 "is_configured": true, 00:16:24.699 "data_offset": 0, 00:16:24.699 "data_size": 65536 00:16:24.699 }, 00:16:24.699 { 00:16:24.699 "name": "BaseBdev2", 00:16:24.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.699 "is_configured": false, 00:16:24.699 "data_offset": 0, 00:16:24.699 "data_size": 0 00:16:24.699 } 00:16:24.699 ] 00:16:24.699 }' 00:16:24.699 00:35:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.699 00:35:58 -- common/autotest_common.sh@10 -- # set +x 00:16:25.265 00:35:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:25.265 [2024-04-27 00:35:58.837847] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.265 [2024-04-27 00:35:58.838105] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, 
state configuring 00:16:25.524 00:35:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:25.524 00:35:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:25.524 [2024-04-27 00:35:59.033907] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.524 [2024-04-27 00:35:59.036055] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.524 [2024-04-27 00:35:59.036258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.524 00:35:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.782 00:35:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.782 "name": "Existed_Raid", 00:16:25.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.782 "strip_size_kb": 0, 00:16:25.782 "state": "configuring", 00:16:25.782 "raid_level": "raid1", 00:16:25.782 "superblock": false, 00:16:25.782 "num_base_bdevs": 2, 00:16:25.782 "num_base_bdevs_discovered": 1, 00:16:25.782 "num_base_bdevs_operational": 2, 00:16:25.782 "base_bdevs_list": [ 00:16:25.782 { 00:16:25.782 "name": "BaseBdev1", 00:16:25.782 "uuid": "6f831b99-c577-4081-8aff-79bcd0ee2a18", 00:16:25.782 "is_configured": true, 00:16:25.782 "data_offset": 0, 00:16:25.782 "data_size": 65536 00:16:25.782 }, 00:16:25.782 { 00:16:25.782 "name": "BaseBdev2", 00:16:25.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.782 "is_configured": false, 00:16:25.782 "data_offset": 0, 00:16:25.782 "data_size": 0 00:16:25.782 } 00:16:25.782 ] 00:16:25.782 }' 00:16:25.782 00:35:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.782 00:35:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.348 00:35:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.606 [2024-04-27 00:36:00.168898] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.606 [2024-04-27 00:36:00.169228] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:16:26.606 [2024-04-27 00:36:00.169367] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:26.606 [2024-04-27 00:36:00.169574] bdev_raid.c: 232:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:26.606 [2024-04-27 00:36:00.170053] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:16:26.606 [2024-04-27 00:36:00.170189] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:16:26.606 [2024-04-27 00:36:00.170622] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.606 BaseBdev2 00:16:26.606 00:36:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:26.606 00:36:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:26.606 00:36:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:26.606 00:36:00 -- common/autotest_common.sh@887 -- # local i 00:16:26.606 00:36:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:26.606 00:36:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:26.606 00:36:00 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:26.864 00:36:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.123 [ 00:16:27.123 { 00:16:27.123 "name": "BaseBdev2", 00:16:27.123 "aliases": [ 00:16:27.123 "9d449cc3-f2ce-47c0-84e5-696610f77eb3" 00:16:27.123 ], 00:16:27.123 "product_name": "Malloc disk", 00:16:27.123 "block_size": 512, 00:16:27.123 "num_blocks": 65536, 00:16:27.123 "uuid": "9d449cc3-f2ce-47c0-84e5-696610f77eb3", 00:16:27.123 "assigned_rate_limits": { 00:16:27.123 "rw_ios_per_sec": 0, 00:16:27.123 "rw_mbytes_per_sec": 0, 00:16:27.123 "r_mbytes_per_sec": 0, 00:16:27.123 "w_mbytes_per_sec": 0 00:16:27.123 }, 00:16:27.123 "claimed": true, 00:16:27.123 "claim_type": "exclusive_write", 00:16:27.123 "zoned": false, 00:16:27.123 "supported_io_types": { 00:16:27.123 "read": true, 00:16:27.123 "write": true, 00:16:27.123 "unmap": true, 00:16:27.123 "write_zeroes": true, 00:16:27.123 "flush": true, 00:16:27.123 "reset": true, 00:16:27.123 "compare": false, 00:16:27.123 "compare_and_write": false, 00:16:27.123 "abort": true, 00:16:27.123 "nvme_admin": false, 00:16:27.123 "nvme_io": false 00:16:27.123 }, 00:16:27.123 "memory_domains": [ 00:16:27.123 { 00:16:27.123 "dma_device_id": "system", 00:16:27.123 "dma_device_type": 1 00:16:27.123 }, 00:16:27.123 { 00:16:27.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.123 "dma_device_type": 2 00:16:27.123 } 00:16:27.123 ], 00:16:27.123 "driver_specific": {} 00:16:27.123 } 00:16:27.123 ] 00:16:27.123 00:36:00 -- common/autotest_common.sh@893 -- # return 0 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
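The bdev_get_bdevs dump above shows BaseBdev2 flip to "claimed": true with "claim_type": "exclusive_write" once the raid bdev takes ownership of it. A quick check for that state, reusing only commands and fields present in the log (the jq // fallback is an addition here, since claim_type is absent while a bdev is unclaimed):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs -b BaseBdev2 -t 2000 |
        jq -r '.[0] | "\(.name) claimed=\(.claimed) claim_type=\(.claim_type // "none")"'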
00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.123 00:36:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.382 00:36:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.382 "name": "Existed_Raid", 00:16:27.382 "uuid": "b52ac46e-2ecf-41ae-8ad4-818f1947209a", 00:16:27.382 "strip_size_kb": 0, 00:16:27.382 "state": "online", 00:16:27.382 "raid_level": "raid1", 00:16:27.382 "superblock": false, 00:16:27.382 "num_base_bdevs": 2, 00:16:27.382 "num_base_bdevs_discovered": 2, 00:16:27.382 "num_base_bdevs_operational": 2, 00:16:27.382 "base_bdevs_list": [ 00:16:27.382 { 00:16:27.382 "name": "BaseBdev1", 00:16:27.382 "uuid": "6f831b99-c577-4081-8aff-79bcd0ee2a18", 00:16:27.382 "is_configured": true, 00:16:27.382 "data_offset": 0, 00:16:27.382 "data_size": 65536 00:16:27.382 }, 00:16:27.382 { 00:16:27.382 "name": "BaseBdev2", 00:16:27.382 "uuid": "9d449cc3-f2ce-47c0-84e5-696610f77eb3", 00:16:27.382 "is_configured": true, 00:16:27.382 "data_offset": 0, 00:16:27.382 "data_size": 65536 00:16:27.382 } 00:16:27.382 ] 00:16:27.382 }' 00:16:27.382 00:36:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.382 00:36:00 -- common/autotest_common.sh@10 -- # set +x 00:16:28.316 00:36:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:28.316 [2024-04-27 00:36:01.853542] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.574 00:36:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.831 00:36:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.831 "name": "Existed_Raid", 00:16:28.831 "uuid": "b52ac46e-2ecf-41ae-8ad4-818f1947209a", 00:16:28.831 "strip_size_kb": 0, 00:16:28.831 "state": "online", 00:16:28.831 "raid_level": "raid1", 00:16:28.832 "superblock": false, 00:16:28.832 "num_base_bdevs": 2, 00:16:28.832 "num_base_bdevs_discovered": 1, 00:16:28.832 "num_base_bdevs_operational": 1, 00:16:28.832 "base_bdevs_list": [ 00:16:28.832 { 00:16:28.832 "name": null, 00:16:28.832 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.832 "is_configured": false, 00:16:28.832 "data_offset": 0, 00:16:28.832 "data_size": 65536 00:16:28.832 }, 00:16:28.832 { 00:16:28.832 "name": "BaseBdev2", 00:16:28.832 "uuid": "9d449cc3-f2ce-47c0-84e5-696610f77eb3", 00:16:28.832 "is_configured": true, 00:16:28.832 "data_offset": 0, 00:16:28.832 "data_size": 65536 00:16:28.832 } 00:16:28.832 ] 00:16:28.832 }' 00:16:28.832 00:36:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.832 00:36:02 -- common/autotest_common.sh@10 -- # set +x 00:16:29.396 00:36:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:29.396 00:36:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:29.396 00:36:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.396 00:36:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:29.653 00:36:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:29.653 00:36:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:29.653 00:36:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:29.910 [2024-04-27 00:36:03.334622] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.910 [2024-04-27 00:36:03.335026] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.910 [2024-04-27 00:36:03.410628] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.910 [2024-04-27 00:36:03.411071] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.911 [2024-04-27 00:36:03.411241] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:16:29.911 00:36:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:29.911 00:36:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:29.911 00:36:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.911 00:36:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:30.169 00:36:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:30.169 00:36:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:30.169 00:36:03 -- bdev/bdev_raid.sh@287 -- # killprocess 121358 00:16:30.169 00:36:03 -- common/autotest_common.sh@936 -- # '[' -z 121358 ']' 00:16:30.169 00:36:03 -- common/autotest_common.sh@940 -- # kill -0 121358 00:16:30.169 00:36:03 -- common/autotest_common.sh@941 -- # uname 00:16:30.169 00:36:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.169 00:36:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121358 00:16:30.169 00:36:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:30.169 00:36:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:30.169 00:36:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121358' 00:16:30.169 killing process with pid 121358 00:16:30.169 00:36:03 -- common/autotest_common.sh@955 -- # kill 121358 00:16:30.169 00:36:03 -- common/autotest_common.sh@960 -- # wait 121358 00:16:30.169 [2024-04-27 00:36:03.700681] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.169 [2024-04-27 00:36:03.700808] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.103 
************************************ 00:16:31.103 END TEST raid_state_function_test 00:16:31.103 ************************************ 00:16:31.103 00:36:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:31.103 00:16:31.103 real 0m10.102s 00:16:31.103 user 0m17.620s 00:16:31.103 sys 0m1.173s 00:16:31.103 00:36:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.103 00:36:04 -- common/autotest_common.sh@10 -- # set +x 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:31.362 00:36:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:31.362 00:36:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.362 00:36:04 -- common/autotest_common.sh@10 -- # set +x 00:16:31.362 ************************************ 00:16:31.362 START TEST raid_state_function_test_sb 00:16:31.362 ************************************ 00:16:31.362 00:36:04 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 2 true 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=121677 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121677' 00:16:31.362 Process raid pid: 121677 00:16:31.362 00:36:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121677 /var/tmp/spdk-raid.sock 00:16:31.362 00:36:04 -- common/autotest_common.sh@817 -- # '[' -z 121677 ']' 00:16:31.362 00:36:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:31.362 00:36:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:31.362 00:36:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:16:31.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:31.362 00:36:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:31.362 00:36:04 -- common/autotest_common.sh@10 -- # set +x 00:16:31.362 [2024-04-27 00:36:04.823436] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:16:31.362 [2024-04-27 00:36:04.823802] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.620 [2024-04-27 00:36:04.982108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.620 [2024-04-27 00:36:05.170665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.879 [2024-04-27 00:36:05.345626] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.446 00:36:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:32.446 00:36:05 -- common/autotest_common.sh@850 -- # return 0 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:32.446 [2024-04-27 00:36:05.950920] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.446 [2024-04-27 00:36:05.951501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.446 [2024-04-27 00:36:05.951654] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.446 [2024-04-27 00:36:05.951821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.446 00:36:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.704 00:36:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.704 "name": "Existed_Raid", 00:16:32.704 "uuid": "dcf0533b-fb78-43cb-b102-4b779a4f010c", 00:16:32.704 "strip_size_kb": 0, 00:16:32.704 "state": "configuring", 00:16:32.704 "raid_level": "raid1", 00:16:32.704 "superblock": true, 00:16:32.704 "num_base_bdevs": 2, 00:16:32.704 "num_base_bdevs_discovered": 0, 00:16:32.704 "num_base_bdevs_operational": 2, 00:16:32.704 "base_bdevs_list": [ 00:16:32.704 { 00:16:32.704 "name": "BaseBdev1", 00:16:32.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.704 "is_configured": false, 00:16:32.704 "data_offset": 0, 00:16:32.704 "data_size": 0 
00:16:32.704 }, 00:16:32.704 { 00:16:32.704 "name": "BaseBdev2", 00:16:32.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.704 "is_configured": false, 00:16:32.704 "data_offset": 0, 00:16:32.704 "data_size": 0 00:16:32.704 } 00:16:32.704 ] 00:16:32.704 }' 00:16:32.704 00:36:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.704 00:36:06 -- common/autotest_common.sh@10 -- # set +x 00:16:33.271 00:36:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:33.528 [2024-04-27 00:36:07.063197] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.528 [2024-04-27 00:36:07.063447] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:16:33.528 00:36:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:33.786 [2024-04-27 00:36:07.263256] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.786 [2024-04-27 00:36:07.263938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.786 [2024-04-27 00:36:07.264143] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:33.786 [2024-04-27 00:36:07.264329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:33.786 00:36:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:34.044 [2024-04-27 00:36:07.515442] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.044 BaseBdev1 00:16:34.044 00:36:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:34.044 00:36:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:34.044 00:36:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:34.044 00:36:07 -- common/autotest_common.sh@887 -- # local i 00:16:34.044 00:36:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:34.044 00:36:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:34.044 00:36:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.302 00:36:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.560 [ 00:16:34.560 { 00:16:34.560 "name": "BaseBdev1", 00:16:34.560 "aliases": [ 00:16:34.560 "fd56d970-918a-47f4-8e0b-3d63c000cab7" 00:16:34.560 ], 00:16:34.560 "product_name": "Malloc disk", 00:16:34.560 "block_size": 512, 00:16:34.560 "num_blocks": 65536, 00:16:34.560 "uuid": "fd56d970-918a-47f4-8e0b-3d63c000cab7", 00:16:34.560 "assigned_rate_limits": { 00:16:34.560 "rw_ios_per_sec": 0, 00:16:34.560 "rw_mbytes_per_sec": 0, 00:16:34.560 "r_mbytes_per_sec": 0, 00:16:34.560 "w_mbytes_per_sec": 0 00:16:34.560 }, 00:16:34.560 "claimed": true, 00:16:34.560 "claim_type": "exclusive_write", 00:16:34.560 "zoned": false, 00:16:34.560 "supported_io_types": { 00:16:34.560 "read": true, 00:16:34.560 "write": true, 00:16:34.560 "unmap": true, 00:16:34.560 "write_zeroes": true, 00:16:34.560 "flush": true, 00:16:34.560 "reset": true, 00:16:34.560 "compare": false, 00:16:34.560 "compare_and_write": false, 
00:16:34.560 "abort": true, 00:16:34.560 "nvme_admin": false, 00:16:34.560 "nvme_io": false 00:16:34.560 }, 00:16:34.560 "memory_domains": [ 00:16:34.560 { 00:16:34.560 "dma_device_id": "system", 00:16:34.560 "dma_device_type": 1 00:16:34.560 }, 00:16:34.560 { 00:16:34.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.560 "dma_device_type": 2 00:16:34.560 } 00:16:34.560 ], 00:16:34.560 "driver_specific": {} 00:16:34.560 } 00:16:34.560 ] 00:16:34.560 00:36:08 -- common/autotest_common.sh@893 -- # return 0 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.560 00:36:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.818 00:36:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.819 "name": "Existed_Raid", 00:16:34.819 "uuid": "c2da6a4a-983f-4a05-aa13-c9cbe914bffd", 00:16:34.819 "strip_size_kb": 0, 00:16:34.819 "state": "configuring", 00:16:34.819 "raid_level": "raid1", 00:16:34.819 "superblock": true, 00:16:34.819 "num_base_bdevs": 2, 00:16:34.819 "num_base_bdevs_discovered": 1, 00:16:34.819 "num_base_bdevs_operational": 2, 00:16:34.819 "base_bdevs_list": [ 00:16:34.819 { 00:16:34.819 "name": "BaseBdev1", 00:16:34.819 "uuid": "fd56d970-918a-47f4-8e0b-3d63c000cab7", 00:16:34.819 "is_configured": true, 00:16:34.819 "data_offset": 2048, 00:16:34.819 "data_size": 63488 00:16:34.819 }, 00:16:34.819 { 00:16:34.819 "name": "BaseBdev2", 00:16:34.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.819 "is_configured": false, 00:16:34.819 "data_offset": 0, 00:16:34.819 "data_size": 0 00:16:34.819 } 00:16:34.819 ] 00:16:34.819 }' 00:16:34.819 00:36:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.819 00:36:08 -- common/autotest_common.sh@10 -- # set +x 00:16:35.384 00:36:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:35.643 [2024-04-27 00:36:09.027899] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.643 [2024-04-27 00:36:09.028232] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:16:35.643 00:36:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:35.643 00:36:09 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:35.901 00:36:09 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.159 BaseBdev1 00:16:36.159 00:36:09 -- bdev/bdev_raid.sh@248 -- # waitforbdev 
BaseBdev1 00:16:36.159 00:36:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:36.159 00:36:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:36.159 00:36:09 -- common/autotest_common.sh@887 -- # local i 00:16:36.159 00:36:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:36.159 00:36:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:36.159 00:36:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.417 00:36:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.674 [ 00:16:36.674 { 00:16:36.674 "name": "BaseBdev1", 00:16:36.675 "aliases": [ 00:16:36.675 "31cc3aa2-5693-4ee9-9c6b-63c4ff9f1e39" 00:16:36.675 ], 00:16:36.675 "product_name": "Malloc disk", 00:16:36.675 "block_size": 512, 00:16:36.675 "num_blocks": 65536, 00:16:36.675 "uuid": "31cc3aa2-5693-4ee9-9c6b-63c4ff9f1e39", 00:16:36.675 "assigned_rate_limits": { 00:16:36.675 "rw_ios_per_sec": 0, 00:16:36.675 "rw_mbytes_per_sec": 0, 00:16:36.675 "r_mbytes_per_sec": 0, 00:16:36.675 "w_mbytes_per_sec": 0 00:16:36.675 }, 00:16:36.675 "claimed": false, 00:16:36.675 "zoned": false, 00:16:36.675 "supported_io_types": { 00:16:36.675 "read": true, 00:16:36.675 "write": true, 00:16:36.675 "unmap": true, 00:16:36.675 "write_zeroes": true, 00:16:36.675 "flush": true, 00:16:36.675 "reset": true, 00:16:36.675 "compare": false, 00:16:36.675 "compare_and_write": false, 00:16:36.675 "abort": true, 00:16:36.675 "nvme_admin": false, 00:16:36.675 "nvme_io": false 00:16:36.675 }, 00:16:36.675 "memory_domains": [ 00:16:36.675 { 00:16:36.675 "dma_device_id": "system", 00:16:36.675 "dma_device_type": 1 00:16:36.675 }, 00:16:36.675 { 00:16:36.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.675 "dma_device_type": 2 00:16:36.675 } 00:16:36.675 ], 00:16:36.675 "driver_specific": {} 00:16:36.675 } 00:16:36.675 ] 00:16:36.675 00:36:10 -- common/autotest_common.sh@893 -- # return 0 00:16:36.675 00:36:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:36.675 [2024-04-27 00:36:10.242212] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.675 [2024-04-27 00:36:10.244541] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.675 [2024-04-27 00:36:10.245248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.675 00:36:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:36.675 00:36:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:36.675 00:36:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:36.675 00:36:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.675 00:36:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.932 "name": "Existed_Raid", 00:16:36.932 "uuid": "3253eba6-49f3-473f-afac-8d60fe540dff", 00:16:36.932 "strip_size_kb": 0, 00:16:36.932 "state": "configuring", 00:16:36.932 "raid_level": "raid1", 00:16:36.932 "superblock": true, 00:16:36.932 "num_base_bdevs": 2, 00:16:36.932 "num_base_bdevs_discovered": 1, 00:16:36.932 "num_base_bdevs_operational": 2, 00:16:36.932 "base_bdevs_list": [ 00:16:36.932 { 00:16:36.932 "name": "BaseBdev1", 00:16:36.932 "uuid": "31cc3aa2-5693-4ee9-9c6b-63c4ff9f1e39", 00:16:36.932 "is_configured": true, 00:16:36.932 "data_offset": 2048, 00:16:36.932 "data_size": 63488 00:16:36.932 }, 00:16:36.932 { 00:16:36.932 "name": "BaseBdev2", 00:16:36.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.932 "is_configured": false, 00:16:36.932 "data_offset": 0, 00:16:36.932 "data_size": 0 00:16:36.932 } 00:16:36.932 ] 00:16:36.932 }' 00:16:36.932 00:36:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.932 00:36:10 -- common/autotest_common.sh@10 -- # set +x 00:16:37.890 00:36:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:38.148 [2024-04-27 00:36:11.482673] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.148 [2024-04-27 00:36:11.483172] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:16:38.148 [2024-04-27 00:36:11.483354] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:38.148 [2024-04-27 00:36:11.483589] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:38.148 BaseBdev2 00:16:38.148 [2024-04-27 00:36:11.484214] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:16:38.148 [2024-04-27 00:36:11.484355] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:16:38.148 [2024-04-27 00:36:11.484688] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.148 00:36:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:38.148 00:36:11 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:38.148 00:36:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:38.148 00:36:11 -- common/autotest_common.sh@887 -- # local i 00:16:38.148 00:36:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:38.148 00:36:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:38.148 00:36:11 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.407 00:36:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:38.407 [ 00:16:38.407 { 00:16:38.407 "name": "BaseBdev2", 00:16:38.407 "aliases": [ 00:16:38.407 "94a1a663-9935-4180-893c-5d3dabf65a9d" 00:16:38.407 ], 00:16:38.407 "product_name": "Malloc disk", 00:16:38.407 "block_size": 512, 00:16:38.407 "num_blocks": 65536, 
00:16:38.407 "uuid": "94a1a663-9935-4180-893c-5d3dabf65a9d", 00:16:38.407 "assigned_rate_limits": { 00:16:38.407 "rw_ios_per_sec": 0, 00:16:38.407 "rw_mbytes_per_sec": 0, 00:16:38.407 "r_mbytes_per_sec": 0, 00:16:38.407 "w_mbytes_per_sec": 0 00:16:38.407 }, 00:16:38.407 "claimed": true, 00:16:38.407 "claim_type": "exclusive_write", 00:16:38.407 "zoned": false, 00:16:38.407 "supported_io_types": { 00:16:38.407 "read": true, 00:16:38.407 "write": true, 00:16:38.407 "unmap": true, 00:16:38.407 "write_zeroes": true, 00:16:38.407 "flush": true, 00:16:38.407 "reset": true, 00:16:38.407 "compare": false, 00:16:38.407 "compare_and_write": false, 00:16:38.407 "abort": true, 00:16:38.407 "nvme_admin": false, 00:16:38.407 "nvme_io": false 00:16:38.407 }, 00:16:38.407 "memory_domains": [ 00:16:38.407 { 00:16:38.407 "dma_device_id": "system", 00:16:38.407 "dma_device_type": 1 00:16:38.407 }, 00:16:38.407 { 00:16:38.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.407 "dma_device_type": 2 00:16:38.407 } 00:16:38.407 ], 00:16:38.407 "driver_specific": {} 00:16:38.407 } 00:16:38.407 ] 00:16:38.407 00:36:11 -- common/autotest_common.sh@893 -- # return 0 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.407 00:36:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.666 00:36:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.666 "name": "Existed_Raid", 00:16:38.666 "uuid": "3253eba6-49f3-473f-afac-8d60fe540dff", 00:16:38.666 "strip_size_kb": 0, 00:16:38.666 "state": "online", 00:16:38.666 "raid_level": "raid1", 00:16:38.666 "superblock": true, 00:16:38.666 "num_base_bdevs": 2, 00:16:38.666 "num_base_bdevs_discovered": 2, 00:16:38.666 "num_base_bdevs_operational": 2, 00:16:38.666 "base_bdevs_list": [ 00:16:38.666 { 00:16:38.666 "name": "BaseBdev1", 00:16:38.666 "uuid": "31cc3aa2-5693-4ee9-9c6b-63c4ff9f1e39", 00:16:38.666 "is_configured": true, 00:16:38.666 "data_offset": 2048, 00:16:38.666 "data_size": 63488 00:16:38.666 }, 00:16:38.666 { 00:16:38.666 "name": "BaseBdev2", 00:16:38.666 "uuid": "94a1a663-9935-4180-893c-5d3dabf65a9d", 00:16:38.666 "is_configured": true, 00:16:38.666 "data_offset": 2048, 00:16:38.666 "data_size": 63488 00:16:38.666 } 00:16:38.666 ] 00:16:38.666 }' 00:16:38.666 00:36:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.666 00:36:12 -- common/autotest_common.sh@10 -- # set +x 00:16:39.599 00:36:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:39.599 [2024-04-27 00:36:13.023252] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.599 00:36:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:39.599 00:36:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:39.599 00:36:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:39.599 00:36:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.600 00:36:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.857 00:36:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.857 "name": "Existed_Raid", 00:16:39.857 "uuid": "3253eba6-49f3-473f-afac-8d60fe540dff", 00:16:39.857 "strip_size_kb": 0, 00:16:39.857 "state": "online", 00:16:39.857 "raid_level": "raid1", 00:16:39.857 "superblock": true, 00:16:39.857 "num_base_bdevs": 2, 00:16:39.857 "num_base_bdevs_discovered": 1, 00:16:39.857 "num_base_bdevs_operational": 1, 00:16:39.857 "base_bdevs_list": [ 00:16:39.857 { 00:16:39.857 "name": null, 00:16:39.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.857 "is_configured": false, 00:16:39.857 "data_offset": 2048, 00:16:39.857 "data_size": 63488 00:16:39.857 }, 00:16:39.857 { 00:16:39.857 "name": "BaseBdev2", 00:16:39.857 "uuid": "94a1a663-9935-4180-893c-5d3dabf65a9d", 00:16:39.857 "is_configured": true, 00:16:39.857 "data_offset": 2048, 00:16:39.857 "data_size": 63488 00:16:39.857 } 00:16:39.857 ] 00:16:39.857 }' 00:16:39.857 00:36:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.857 00:36:13 -- common/autotest_common.sh@10 -- # set +x 00:16:40.423 00:36:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:40.423 00:36:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:40.423 00:36:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.423 00:36:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:40.681 00:36:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:40.681 00:36:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.681 00:36:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:40.939 [2024-04-27 00:36:14.475091] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.939 [2024-04-27 00:36:14.475388] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.196 [2024-04-27 
00:36:14.543195] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.196 [2024-04-27 00:36:14.543526] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.196 [2024-04-27 00:36:14.543677] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:16:41.196 00:36:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:41.196 00:36:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:41.196 00:36:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.196 00:36:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:41.454 00:36:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:41.454 00:36:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:41.454 00:36:14 -- bdev/bdev_raid.sh@287 -- # killprocess 121677 00:16:41.454 00:36:14 -- common/autotest_common.sh@936 -- # '[' -z 121677 ']' 00:16:41.454 00:36:14 -- common/autotest_common.sh@940 -- # kill -0 121677 00:16:41.454 00:36:14 -- common/autotest_common.sh@941 -- # uname 00:16:41.454 00:36:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:41.454 00:36:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 121677 00:16:41.454 killing process with pid 121677 00:16:41.454 00:36:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:41.454 00:36:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:41.454 00:36:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 121677' 00:16:41.454 00:36:14 -- common/autotest_common.sh@955 -- # kill 121677 00:16:41.454 [2024-04-27 00:36:14.848284] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.454 00:36:14 -- common/autotest_common.sh@960 -- # wait 121677 00:16:41.454 [2024-04-27 00:36:14.848402] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.388 ************************************ 00:16:42.388 END TEST raid_state_function_test_sb 00:16:42.388 ************************************ 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:42.388 00:16:42.388 real 0m11.059s 00:16:42.388 user 0m19.366s 00:16:42.388 sys 0m1.260s 00:16:42.388 00:36:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:42.388 00:36:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:42.388 00:36:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:42.388 00:36:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.388 00:36:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.388 ************************************ 00:16:42.388 START TEST raid_superblock_test 00:16:42.388 ************************************ 00:16:42.388 00:36:15 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 2 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=122017 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:42.388 00:36:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122017 /var/tmp/spdk-raid.sock 00:16:42.388 00:36:15 -- common/autotest_common.sh@817 -- # '[' -z 122017 ']' 00:16:42.388 00:36:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:42.388 00:36:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:42.388 00:36:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:42.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:42.389 00:36:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:42.389 00:36:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.646 [2024-04-27 00:36:15.978391] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:16:42.646 [2024-04-27 00:36:15.978900] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122017 ] 00:16:42.646 [2024-04-27 00:36:16.147652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.905 [2024-04-27 00:36:16.380619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.163 [2024-04-27 00:36:16.575810] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.421 00:36:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:43.421 00:36:16 -- common/autotest_common.sh@850 -- # return 0 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.421 00:36:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:43.679 malloc1 00:16:43.679 00:36:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.955 [2024-04-27 00:36:17.399163] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:43.955 [2024-04-27 00:36:17.399888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.955 [2024-04-27 00:36:17.400192] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:43.955 [2024-04-27 00:36:17.400484] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.955 [2024-04-27 00:36:17.403581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.955 [2024-04-27 00:36:17.403862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.955 pt1 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.955 00:36:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:44.235 malloc2 00:16:44.235 00:36:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:44.493 [2024-04-27 00:36:17.926409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.493 [2024-04-27 00:36:17.926937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.494 [2024-04-27 00:36:17.927264] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:44.494 [2024-04-27 00:36:17.927638] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.494 [2024-04-27 00:36:17.930552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.494 [2024-04-27 00:36:17.930866] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.494 pt2 00:16:44.494 00:36:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:44.494 00:36:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:44.494 00:36:17 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:44.753 [2024-04-27 00:36:18.143403] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:44.753 [2024-04-27 00:36:18.145723] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.753 [2024-04-27 00:36:18.146115] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:16:44.753 [2024-04-27 00:36:18.146286] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:44.753 [2024-04-27 00:36:18.146466] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:44.753 [2024-04-27 00:36:18.147072] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:16:44.753 [2024-04-27 00:36:18.147246] 
bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:16:44.753 [2024-04-27 00:36:18.147550] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.753 00:36:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.011 00:36:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.011 "name": "raid_bdev1", 00:16:45.011 "uuid": "5b827b7e-af9b-4185-b3bd-149bcef44963", 00:16:45.011 "strip_size_kb": 0, 00:16:45.011 "state": "online", 00:16:45.011 "raid_level": "raid1", 00:16:45.011 "superblock": true, 00:16:45.011 "num_base_bdevs": 2, 00:16:45.011 "num_base_bdevs_discovered": 2, 00:16:45.012 "num_base_bdevs_operational": 2, 00:16:45.012 "base_bdevs_list": [ 00:16:45.012 { 00:16:45.012 "name": "pt1", 00:16:45.012 "uuid": "a5efefcc-31fc-52d2-9598-ad6c5eec067a", 00:16:45.012 "is_configured": true, 00:16:45.012 "data_offset": 2048, 00:16:45.012 "data_size": 63488 00:16:45.012 }, 00:16:45.012 { 00:16:45.012 "name": "pt2", 00:16:45.012 "uuid": "5c044af1-742a-5996-8fee-e0b2155396b4", 00:16:45.012 "is_configured": true, 00:16:45.012 "data_offset": 2048, 00:16:45.012 "data_size": 63488 00:16:45.012 } 00:16:45.012 ] 00:16:45.012 }' 00:16:45.012 00:36:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.012 00:36:18 -- common/autotest_common.sh@10 -- # set +x 00:16:45.579 00:36:18 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:45.579 00:36:18 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:45.838 [2024-04-27 00:36:19.180101] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.838 00:36:19 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5b827b7e-af9b-4185-b3bd-149bcef44963 00:16:45.838 00:36:19 -- bdev/bdev_raid.sh@380 -- # '[' -z 5b827b7e-af9b-4185-b3bd-149bcef44963 ']' 00:16:45.838 00:36:19 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:45.838 [2024-04-27 00:36:19.387877] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:45.838 [2024-04-27 00:36:19.388046] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.838 [2024-04-27 00:36:19.388218] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.838 [2024-04-27 00:36:19.388397] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:45.838 [2024-04-27 00:36:19.388505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:16:45.838 00:36:19 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.838 00:36:19 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:46.097 00:36:19 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:46.097 00:36:19 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:46.097 00:36:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.097 00:36:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:46.355 00:36:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.355 00:36:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:46.614 00:36:20 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:46.614 00:36:20 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:46.872 00:36:20 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:46.872 00:36:20 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:46.872 00:36:20 -- common/autotest_common.sh@638 -- # local es=0 00:16:46.872 00:36:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:46.872 00:36:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.872 00:36:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.872 00:36:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.872 00:36:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.872 00:36:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.872 00:36:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:46.872 00:36:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.872 00:36:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:46.872 00:36:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:47.131 [2024-04-27 00:36:20.604190] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:47.131 [2024-04-27 00:36:20.606399] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:47.131 [2024-04-27 00:36:20.606637] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:47.131 [2024-04-27 00:36:20.607360] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:47.131 [2024-04-27 00:36:20.607682] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.131 [2024-04-27 00:36:20.607737] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 
name raid_bdev1, state configuring 00:16:47.131 request: 00:16:47.131 { 00:16:47.131 "name": "raid_bdev1", 00:16:47.131 "raid_level": "raid1", 00:16:47.131 "base_bdevs": [ 00:16:47.131 "malloc1", 00:16:47.131 "malloc2" 00:16:47.131 ], 00:16:47.131 "superblock": false, 00:16:47.131 "method": "bdev_raid_create", 00:16:47.131 "req_id": 1 00:16:47.131 } 00:16:47.131 Got JSON-RPC error response 00:16:47.131 response: 00:16:47.131 { 00:16:47.131 "code": -17, 00:16:47.131 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:47.131 } 00:16:47.131 00:36:20 -- common/autotest_common.sh@641 -- # es=1 00:16:47.131 00:36:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:47.131 00:36:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:47.131 00:36:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:47.131 00:36:20 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.131 00:36:20 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:47.390 00:36:20 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:47.390 00:36:20 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:47.390 00:36:20 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.648 [2024-04-27 00:36:21.040272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.648 [2024-04-27 00:36:21.040709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.648 [2024-04-27 00:36:21.041019] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:47.648 [2024-04-27 00:36:21.041321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.648 [2024-04-27 00:36:21.044075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.648 [2024-04-27 00:36:21.044369] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.648 [2024-04-27 00:36:21.044700] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:47.648 [2024-04-27 00:36:21.044904] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.648 pt1 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.648 00:36:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.907 00:36:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.907 "name": "raid_bdev1", 00:16:47.907 "uuid": 
"5b827b7e-af9b-4185-b3bd-149bcef44963", 00:16:47.907 "strip_size_kb": 0, 00:16:47.907 "state": "configuring", 00:16:47.907 "raid_level": "raid1", 00:16:47.907 "superblock": true, 00:16:47.907 "num_base_bdevs": 2, 00:16:47.907 "num_base_bdevs_discovered": 1, 00:16:47.907 "num_base_bdevs_operational": 2, 00:16:47.907 "base_bdevs_list": [ 00:16:47.907 { 00:16:47.907 "name": "pt1", 00:16:47.907 "uuid": "a5efefcc-31fc-52d2-9598-ad6c5eec067a", 00:16:47.907 "is_configured": true, 00:16:47.907 "data_offset": 2048, 00:16:47.907 "data_size": 63488 00:16:47.907 }, 00:16:47.907 { 00:16:47.907 "name": null, 00:16:47.907 "uuid": "5c044af1-742a-5996-8fee-e0b2155396b4", 00:16:47.907 "is_configured": false, 00:16:47.907 "data_offset": 2048, 00:16:47.907 "data_size": 63488 00:16:47.907 } 00:16:47.907 ] 00:16:47.907 }' 00:16:47.907 00:36:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.907 00:36:21 -- common/autotest_common.sh@10 -- # set +x 00:16:48.473 00:36:21 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:48.473 00:36:21 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:48.473 00:36:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:48.473 00:36:21 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.473 [2024-04-27 00:36:22.005018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.473 [2024-04-27 00:36:22.005702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.473 [2024-04-27 00:36:22.005980] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:48.473 [2024-04-27 00:36:22.006217] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.473 [2024-04-27 00:36:22.007049] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.473 [2024-04-27 00:36:22.007325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.473 [2024-04-27 00:36:22.007677] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:48.473 [2024-04-27 00:36:22.007811] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.473 [2024-04-27 00:36:22.007977] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:16:48.473 [2024-04-27 00:36:22.008076] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:48.473 [2024-04-27 00:36:22.008226] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:48.473 [2024-04-27 00:36:22.008699] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:16:48.473 [2024-04-27 00:36:22.008813] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:16:48.473 [2024-04-27 00:36:22.009089] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.473 pt2 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:48.473 00:36:22 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.473 00:36:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.731 00:36:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.732 "name": "raid_bdev1", 00:16:48.732 "uuid": "5b827b7e-af9b-4185-b3bd-149bcef44963", 00:16:48.732 "strip_size_kb": 0, 00:16:48.732 "state": "online", 00:16:48.732 "raid_level": "raid1", 00:16:48.732 "superblock": true, 00:16:48.732 "num_base_bdevs": 2, 00:16:48.732 "num_base_bdevs_discovered": 2, 00:16:48.732 "num_base_bdevs_operational": 2, 00:16:48.732 "base_bdevs_list": [ 00:16:48.732 { 00:16:48.732 "name": "pt1", 00:16:48.732 "uuid": "a5efefcc-31fc-52d2-9598-ad6c5eec067a", 00:16:48.732 "is_configured": true, 00:16:48.732 "data_offset": 2048, 00:16:48.732 "data_size": 63488 00:16:48.732 }, 00:16:48.732 { 00:16:48.732 "name": "pt2", 00:16:48.732 "uuid": "5c044af1-742a-5996-8fee-e0b2155396b4", 00:16:48.732 "is_configured": true, 00:16:48.732 "data_offset": 2048, 00:16:48.732 "data_size": 63488 00:16:48.732 } 00:16:48.732 ] 00:16:48.732 }' 00:16:48.732 00:36:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.732 00:36:22 -- common/autotest_common.sh@10 -- # set +x 00:16:49.298 00:36:22 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:49.298 00:36:22 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:49.554 [2024-04-27 00:36:23.121541] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.554 00:36:23 -- bdev/bdev_raid.sh@430 -- # '[' 5b827b7e-af9b-4185-b3bd-149bcef44963 '!=' 5b827b7e-af9b-4185-b3bd-149bcef44963 ']' 00:16:49.554 00:36:23 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:49.554 00:36:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:49.554 00:36:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:49.554 00:36:23 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:49.811 [2024-04-27 00:36:23.341379] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:49.811 00:36:23 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.811 00:36:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.068 00:36:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.068 "name": "raid_bdev1", 00:16:50.068 "uuid": "5b827b7e-af9b-4185-b3bd-149bcef44963", 00:16:50.068 "strip_size_kb": 0, 00:16:50.068 "state": "online", 00:16:50.068 "raid_level": "raid1", 00:16:50.068 "superblock": true, 00:16:50.068 "num_base_bdevs": 2, 00:16:50.068 "num_base_bdevs_discovered": 1, 00:16:50.068 "num_base_bdevs_operational": 1, 00:16:50.068 "base_bdevs_list": [ 00:16:50.068 { 00:16:50.068 "name": null, 00:16:50.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.068 "is_configured": false, 00:16:50.068 "data_offset": 2048, 00:16:50.068 "data_size": 63488 00:16:50.068 }, 00:16:50.068 { 00:16:50.068 "name": "pt2", 00:16:50.068 "uuid": "5c044af1-742a-5996-8fee-e0b2155396b4", 00:16:50.068 "is_configured": true, 00:16:50.068 "data_offset": 2048, 00:16:50.068 "data_size": 63488 00:16:50.068 } 00:16:50.068 ] 00:16:50.068 }' 00:16:50.068 00:36:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.068 00:36:23 -- common/autotest_common.sh@10 -- # set +x 00:16:51.000 00:36:24 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:51.000 [2024-04-27 00:36:24.473558] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.000 [2024-04-27 00:36:24.473748] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.000 [2024-04-27 00:36:24.473953] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.000 [2024-04-27 00:36:24.474137] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.000 [2024-04-27 00:36:24.474282] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:16:51.000 00:36:24 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.000 00:36:24 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:51.258 00:36:24 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:51.259 00:36:24 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:51.259 00:36:24 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:51.259 00:36:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:51.259 00:36:24 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:51.567 00:36:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:51.567 00:36:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:51.567 00:36:24 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:51.567 00:36:24 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:51.567 00:36:24 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:51.567 00:36:24 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.832 [2024-04-27 00:36:25.189726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.832 [2024-04-27 00:36:25.190405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.832 [2024-04-27 
00:36:25.190749] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:51.832 [2024-04-27 00:36:25.191031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.832 [2024-04-27 00:36:25.193471] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.832 [2024-04-27 00:36:25.193728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.832 [2024-04-27 00:36:25.194049] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:51.832 [2024-04-27 00:36:25.194217] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.832 [2024-04-27 00:36:25.194580] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:16:51.832 [2024-04-27 00:36:25.194706] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.832 [2024-04-27 00:36:25.194882] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:51.832 pt2 00:16:51.832 [2024-04-27 00:36:25.195366] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:16:51.832 [2024-04-27 00:36:25.195497] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:16:51.832 [2024-04-27 00:36:25.195724] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.832 00:36:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.089 00:36:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.089 "name": "raid_bdev1", 00:16:52.090 "uuid": "5b827b7e-af9b-4185-b3bd-149bcef44963", 00:16:52.090 "strip_size_kb": 0, 00:16:52.090 "state": "online", 00:16:52.090 "raid_level": "raid1", 00:16:52.090 "superblock": true, 00:16:52.090 "num_base_bdevs": 2, 00:16:52.090 "num_base_bdevs_discovered": 1, 00:16:52.090 "num_base_bdevs_operational": 1, 00:16:52.090 "base_bdevs_list": [ 00:16:52.090 { 00:16:52.090 "name": null, 00:16:52.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.090 "is_configured": false, 00:16:52.090 "data_offset": 2048, 00:16:52.090 "data_size": 63488 00:16:52.090 }, 00:16:52.090 { 00:16:52.090 "name": "pt2", 00:16:52.090 "uuid": "5c044af1-742a-5996-8fee-e0b2155396b4", 00:16:52.090 "is_configured": true, 00:16:52.090 "data_offset": 2048, 00:16:52.090 "data_size": 63488 00:16:52.090 } 00:16:52.090 ] 00:16:52.090 }' 00:16:52.090 00:36:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:16:52.090 00:36:25 -- common/autotest_common.sh@10 -- # set +x 00:16:52.656 00:36:26 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:52.657 00:36:26 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:52.657 00:36:26 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:52.657 [2024-04-27 00:36:26.198537] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.657 00:36:26 -- bdev/bdev_raid.sh@506 -- # '[' 5b827b7e-af9b-4185-b3bd-149bcef44963 '!=' 5b827b7e-af9b-4185-b3bd-149bcef44963 ']' 00:16:52.657 00:36:26 -- bdev/bdev_raid.sh@511 -- # killprocess 122017 00:16:52.657 00:36:26 -- common/autotest_common.sh@936 -- # '[' -z 122017 ']' 00:16:52.657 00:36:26 -- common/autotest_common.sh@940 -- # kill -0 122017 00:16:52.657 00:36:26 -- common/autotest_common.sh@941 -- # uname 00:16:52.657 00:36:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:52.657 00:36:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122017 00:16:52.657 killing process with pid 122017 00:16:52.657 00:36:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:52.657 00:36:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:52.657 00:36:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122017' 00:16:52.657 00:36:26 -- common/autotest_common.sh@955 -- # kill 122017 00:16:52.657 [2024-04-27 00:36:26.236036] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.657 00:36:26 -- common/autotest_common.sh@960 -- # wait 122017 00:16:52.657 [2024-04-27 00:36:26.236110] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.657 [2024-04-27 00:36:26.236159] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.657 [2024-04-27 00:36:26.236170] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:16:52.915 [2024-04-27 00:36:26.369045] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.849 ************************************ 00:16:53.849 END TEST raid_superblock_test 00:16:53.849 ************************************ 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:53.849 00:16:53.849 real 0m11.419s 00:16:53.849 user 0m20.296s 00:16:53.849 sys 0m1.308s 00:16:53.849 00:36:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:53.849 00:36:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:53.849 00:36:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:53.849 00:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:53.849 00:36:27 -- common/autotest_common.sh@10 -- # set +x 00:16:53.849 ************************************ 00:16:53.849 START TEST raid_state_function_test 00:16:53.849 ************************************ 00:16:53.849 00:36:27 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 3 false 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@204 -- 
# local superblock=false 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=122372 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122372' 00:16:53.849 Process raid pid: 122372 00:16:53.849 00:36:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122372 /var/tmp/spdk-raid.sock 00:16:53.849 00:36:27 -- common/autotest_common.sh@817 -- # '[' -z 122372 ']' 00:16:53.849 00:36:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.108 00:36:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:54.108 00:36:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.108 00:36:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:54.108 00:36:27 -- common/autotest_common.sh@10 -- # set +x 00:16:54.108 [2024-04-27 00:36:27.518227] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:16:54.108 [2024-04-27 00:36:27.518863] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.366 [2024-04-27 00:36:27.711627] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.366 [2024-04-27 00:36:27.897207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.624 [2024-04-27 00:36:28.085784] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.191 00:36:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:55.191 00:36:28 -- common/autotest_common.sh@850 -- # return 0 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:55.191 [2024-04-27 00:36:28.692687] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.191 [2024-04-27 00:36:28.693334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.191 [2024-04-27 00:36:28.693507] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.191 [2024-04-27 00:36:28.693672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.191 [2024-04-27 00:36:28.693821] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:55.191 [2024-04-27 00:36:28.694004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.191 00:36:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.448 00:36:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.448 "name": "Existed_Raid", 00:16:55.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.448 "strip_size_kb": 64, 00:16:55.448 "state": "configuring", 00:16:55.448 "raid_level": "raid0", 00:16:55.448 "superblock": false, 00:16:55.448 "num_base_bdevs": 3, 00:16:55.448 "num_base_bdevs_discovered": 0, 00:16:55.448 "num_base_bdevs_operational": 3, 00:16:55.448 "base_bdevs_list": [ 00:16:55.448 { 00:16:55.448 "name": "BaseBdev1", 00:16:55.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.448 "is_configured": false, 00:16:55.448 "data_offset": 0, 00:16:55.448 "data_size": 0 00:16:55.448 }, 00:16:55.448 { 00:16:55.448 "name": "BaseBdev2", 00:16:55.448 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:55.448 "is_configured": false, 00:16:55.448 "data_offset": 0, 00:16:55.448 "data_size": 0 00:16:55.448 }, 00:16:55.448 { 00:16:55.448 "name": "BaseBdev3", 00:16:55.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.448 "is_configured": false, 00:16:55.448 "data_offset": 0, 00:16:55.448 "data_size": 0 00:16:55.448 } 00:16:55.448 ] 00:16:55.448 }' 00:16:55.448 00:36:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.448 00:36:28 -- common/autotest_common.sh@10 -- # set +x 00:16:56.015 00:36:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.274 [2024-04-27 00:36:29.780823] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.274 [2024-04-27 00:36:29.781020] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:16:56.274 00:36:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:56.531 [2024-04-27 00:36:30.041019] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.531 [2024-04-27 00:36:30.041822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.531 [2024-04-27 00:36:30.042017] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.531 [2024-04-27 00:36:30.042202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.531 [2024-04-27 00:36:30.042343] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.531 [2024-04-27 00:36:30.042573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.531 00:36:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.789 [2024-04-27 00:36:30.305765] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.789 BaseBdev1 00:16:56.789 00:36:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:56.789 00:36:30 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:56.789 00:36:30 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:56.789 00:36:30 -- common/autotest_common.sh@887 -- # local i 00:16:56.789 00:36:30 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:56.789 00:36:30 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:56.789 00:36:30 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.047 00:36:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:57.304 [ 00:16:57.304 { 00:16:57.304 "name": "BaseBdev1", 00:16:57.304 "aliases": [ 00:16:57.304 "6a5498ba-4f32-44e5-b005-342d7f700eb0" 00:16:57.304 ], 00:16:57.304 "product_name": "Malloc disk", 00:16:57.304 "block_size": 512, 00:16:57.304 "num_blocks": 65536, 00:16:57.304 "uuid": "6a5498ba-4f32-44e5-b005-342d7f700eb0", 00:16:57.304 "assigned_rate_limits": { 00:16:57.304 "rw_ios_per_sec": 0, 00:16:57.304 "rw_mbytes_per_sec": 0, 00:16:57.304 "r_mbytes_per_sec": 0, 00:16:57.304 "w_mbytes_per_sec": 0 
00:16:57.304 }, 00:16:57.304 "claimed": true, 00:16:57.304 "claim_type": "exclusive_write", 00:16:57.304 "zoned": false, 00:16:57.304 "supported_io_types": { 00:16:57.304 "read": true, 00:16:57.304 "write": true, 00:16:57.304 "unmap": true, 00:16:57.304 "write_zeroes": true, 00:16:57.304 "flush": true, 00:16:57.304 "reset": true, 00:16:57.304 "compare": false, 00:16:57.304 "compare_and_write": false, 00:16:57.304 "abort": true, 00:16:57.304 "nvme_admin": false, 00:16:57.304 "nvme_io": false 00:16:57.304 }, 00:16:57.304 "memory_domains": [ 00:16:57.304 { 00:16:57.304 "dma_device_id": "system", 00:16:57.304 "dma_device_type": 1 00:16:57.304 }, 00:16:57.304 { 00:16:57.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.304 "dma_device_type": 2 00:16:57.304 } 00:16:57.304 ], 00:16:57.304 "driver_specific": {} 00:16:57.304 } 00:16:57.304 ] 00:16:57.304 00:36:30 -- common/autotest_common.sh@893 -- # return 0 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.304 00:36:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.562 00:36:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.562 "name": "Existed_Raid", 00:16:57.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.562 "strip_size_kb": 64, 00:16:57.562 "state": "configuring", 00:16:57.562 "raid_level": "raid0", 00:16:57.562 "superblock": false, 00:16:57.562 "num_base_bdevs": 3, 00:16:57.562 "num_base_bdevs_discovered": 1, 00:16:57.562 "num_base_bdevs_operational": 3, 00:16:57.562 "base_bdevs_list": [ 00:16:57.562 { 00:16:57.562 "name": "BaseBdev1", 00:16:57.562 "uuid": "6a5498ba-4f32-44e5-b005-342d7f700eb0", 00:16:57.562 "is_configured": true, 00:16:57.562 "data_offset": 0, 00:16:57.562 "data_size": 65536 00:16:57.562 }, 00:16:57.562 { 00:16:57.562 "name": "BaseBdev2", 00:16:57.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.562 "is_configured": false, 00:16:57.562 "data_offset": 0, 00:16:57.562 "data_size": 0 00:16:57.562 }, 00:16:57.562 { 00:16:57.562 "name": "BaseBdev3", 00:16:57.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.562 "is_configured": false, 00:16:57.562 "data_offset": 0, 00:16:57.562 "data_size": 0 00:16:57.562 } 00:16:57.562 ] 00:16:57.562 }' 00:16:57.562 00:36:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.562 00:36:30 -- common/autotest_common.sh@10 -- # set +x 00:16:58.128 00:36:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:58.386 [2024-04-27 00:36:31.870153] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.386 
[2024-04-27 00:36:31.870395] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:16:58.386 00:36:31 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:58.386 00:36:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:58.680 [2024-04-27 00:36:32.086244] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:58.680 [2024-04-27 00:36:32.088481] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:58.680 [2024-04-27 00:36:32.089032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:58.680 [2024-04-27 00:36:32.089182] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:58.680 [2024-04-27 00:36:32.089345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.680 00:36:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.938 00:36:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.938 "name": "Existed_Raid", 00:16:58.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.938 "strip_size_kb": 64, 00:16:58.938 "state": "configuring", 00:16:58.938 "raid_level": "raid0", 00:16:58.938 "superblock": false, 00:16:58.938 "num_base_bdevs": 3, 00:16:58.938 "num_base_bdevs_discovered": 1, 00:16:58.938 "num_base_bdevs_operational": 3, 00:16:58.938 "base_bdevs_list": [ 00:16:58.938 { 00:16:58.938 "name": "BaseBdev1", 00:16:58.938 "uuid": "6a5498ba-4f32-44e5-b005-342d7f700eb0", 00:16:58.938 "is_configured": true, 00:16:58.938 "data_offset": 0, 00:16:58.938 "data_size": 65536 00:16:58.938 }, 00:16:58.938 { 00:16:58.938 "name": "BaseBdev2", 00:16:58.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.938 "is_configured": false, 00:16:58.938 "data_offset": 0, 00:16:58.938 "data_size": 0 00:16:58.938 }, 00:16:58.938 { 00:16:58.938 "name": "BaseBdev3", 00:16:58.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.938 "is_configured": false, 00:16:58.938 "data_offset": 0, 00:16:58.938 "data_size": 0 00:16:58.938 } 00:16:58.938 ] 00:16:58.938 }' 00:16:58.938 00:36:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.938 00:36:32 -- common/autotest_common.sh@10 
-- # set +x 00:16:59.506 00:36:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.764 [2024-04-27 00:36:33.299671] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.764 BaseBdev2 00:16:59.764 00:36:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:59.764 00:36:33 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:59.764 00:36:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:59.764 00:36:33 -- common/autotest_common.sh@887 -- # local i 00:16:59.764 00:36:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:59.764 00:36:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:59.764 00:36:33 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.022 00:36:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:00.279 [ 00:17:00.279 { 00:17:00.279 "name": "BaseBdev2", 00:17:00.279 "aliases": [ 00:17:00.279 "28bae87e-1dc9-4b63-8681-d350daa3fc2d" 00:17:00.279 ], 00:17:00.279 "product_name": "Malloc disk", 00:17:00.279 "block_size": 512, 00:17:00.279 "num_blocks": 65536, 00:17:00.279 "uuid": "28bae87e-1dc9-4b63-8681-d350daa3fc2d", 00:17:00.279 "assigned_rate_limits": { 00:17:00.279 "rw_ios_per_sec": 0, 00:17:00.279 "rw_mbytes_per_sec": 0, 00:17:00.279 "r_mbytes_per_sec": 0, 00:17:00.279 "w_mbytes_per_sec": 0 00:17:00.279 }, 00:17:00.279 "claimed": true, 00:17:00.279 "claim_type": "exclusive_write", 00:17:00.279 "zoned": false, 00:17:00.279 "supported_io_types": { 00:17:00.279 "read": true, 00:17:00.279 "write": true, 00:17:00.279 "unmap": true, 00:17:00.279 "write_zeroes": true, 00:17:00.279 "flush": true, 00:17:00.279 "reset": true, 00:17:00.279 "compare": false, 00:17:00.279 "compare_and_write": false, 00:17:00.279 "abort": true, 00:17:00.279 "nvme_admin": false, 00:17:00.279 "nvme_io": false 00:17:00.279 }, 00:17:00.279 "memory_domains": [ 00:17:00.279 { 00:17:00.279 "dma_device_id": "system", 00:17:00.279 "dma_device_type": 1 00:17:00.279 }, 00:17:00.279 { 00:17:00.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.279 "dma_device_type": 2 00:17:00.279 } 00:17:00.279 ], 00:17:00.279 "driver_specific": {} 00:17:00.279 } 00:17:00.279 ] 00:17:00.279 00:36:33 -- common/autotest_common.sh@893 -- # return 0 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.279 00:36:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.280 00:36:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.280 00:36:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.280 00:36:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.280 00:36:33 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.280 00:36:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.537 00:36:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.537 "name": "Existed_Raid", 00:17:00.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.537 "strip_size_kb": 64, 00:17:00.537 "state": "configuring", 00:17:00.537 "raid_level": "raid0", 00:17:00.537 "superblock": false, 00:17:00.537 "num_base_bdevs": 3, 00:17:00.537 "num_base_bdevs_discovered": 2, 00:17:00.537 "num_base_bdevs_operational": 3, 00:17:00.537 "base_bdevs_list": [ 00:17:00.537 { 00:17:00.537 "name": "BaseBdev1", 00:17:00.537 "uuid": "6a5498ba-4f32-44e5-b005-342d7f700eb0", 00:17:00.537 "is_configured": true, 00:17:00.537 "data_offset": 0, 00:17:00.537 "data_size": 65536 00:17:00.537 }, 00:17:00.537 { 00:17:00.537 "name": "BaseBdev2", 00:17:00.537 "uuid": "28bae87e-1dc9-4b63-8681-d350daa3fc2d", 00:17:00.537 "is_configured": true, 00:17:00.537 "data_offset": 0, 00:17:00.537 "data_size": 65536 00:17:00.537 }, 00:17:00.537 { 00:17:00.537 "name": "BaseBdev3", 00:17:00.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.537 "is_configured": false, 00:17:00.537 "data_offset": 0, 00:17:00.537 "data_size": 0 00:17:00.537 } 00:17:00.537 ] 00:17:00.537 }' 00:17:00.537 00:36:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.537 00:36:34 -- common/autotest_common.sh@10 -- # set +x 00:17:01.472 00:36:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:01.472 [2024-04-27 00:36:35.020208] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.472 [2024-04-27 00:36:35.020435] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:01.472 [2024-04-27 00:36:35.020496] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:01.472 [2024-04-27 00:36:35.020727] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:01.472 [2024-04-27 00:36:35.021196] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:01.472 [2024-04-27 00:36:35.021348] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:17:01.472 [2024-04-27 00:36:35.021749] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.472 BaseBdev3 00:17:01.472 00:36:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:01.472 00:36:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:17:01.472 00:36:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:01.472 00:36:35 -- common/autotest_common.sh@887 -- # local i 00:17:01.472 00:36:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:01.472 00:36:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:01.472 00:36:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:01.730 00:36:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:01.987 [ 00:17:01.987 { 00:17:01.988 "name": "BaseBdev3", 00:17:01.988 "aliases": [ 00:17:01.988 "8f961fd9-66eb-443d-a41c-aa1c5ce623d5" 00:17:01.988 ], 00:17:01.988 "product_name": 
"Malloc disk", 00:17:01.988 "block_size": 512, 00:17:01.988 "num_blocks": 65536, 00:17:01.988 "uuid": "8f961fd9-66eb-443d-a41c-aa1c5ce623d5", 00:17:01.988 "assigned_rate_limits": { 00:17:01.988 "rw_ios_per_sec": 0, 00:17:01.988 "rw_mbytes_per_sec": 0, 00:17:01.988 "r_mbytes_per_sec": 0, 00:17:01.988 "w_mbytes_per_sec": 0 00:17:01.988 }, 00:17:01.988 "claimed": true, 00:17:01.988 "claim_type": "exclusive_write", 00:17:01.988 "zoned": false, 00:17:01.988 "supported_io_types": { 00:17:01.988 "read": true, 00:17:01.988 "write": true, 00:17:01.988 "unmap": true, 00:17:01.988 "write_zeroes": true, 00:17:01.988 "flush": true, 00:17:01.988 "reset": true, 00:17:01.988 "compare": false, 00:17:01.988 "compare_and_write": false, 00:17:01.988 "abort": true, 00:17:01.988 "nvme_admin": false, 00:17:01.988 "nvme_io": false 00:17:01.988 }, 00:17:01.988 "memory_domains": [ 00:17:01.988 { 00:17:01.988 "dma_device_id": "system", 00:17:01.988 "dma_device_type": 1 00:17:01.988 }, 00:17:01.988 { 00:17:01.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.988 "dma_device_type": 2 00:17:01.988 } 00:17:01.988 ], 00:17:01.988 "driver_specific": {} 00:17:01.988 } 00:17:01.988 ] 00:17:01.988 00:36:35 -- common/autotest_common.sh@893 -- # return 0 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.988 00:36:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.246 00:36:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.246 "name": "Existed_Raid", 00:17:02.246 "uuid": "9853879a-56b7-4c44-949f-06bf95ec9490", 00:17:02.246 "strip_size_kb": 64, 00:17:02.246 "state": "online", 00:17:02.246 "raid_level": "raid0", 00:17:02.246 "superblock": false, 00:17:02.246 "num_base_bdevs": 3, 00:17:02.246 "num_base_bdevs_discovered": 3, 00:17:02.246 "num_base_bdevs_operational": 3, 00:17:02.246 "base_bdevs_list": [ 00:17:02.246 { 00:17:02.246 "name": "BaseBdev1", 00:17:02.246 "uuid": "6a5498ba-4f32-44e5-b005-342d7f700eb0", 00:17:02.246 "is_configured": true, 00:17:02.246 "data_offset": 0, 00:17:02.246 "data_size": 65536 00:17:02.246 }, 00:17:02.246 { 00:17:02.246 "name": "BaseBdev2", 00:17:02.246 "uuid": "28bae87e-1dc9-4b63-8681-d350daa3fc2d", 00:17:02.246 "is_configured": true, 00:17:02.246 "data_offset": 0, 00:17:02.246 "data_size": 65536 00:17:02.246 }, 00:17:02.246 { 00:17:02.246 "name": "BaseBdev3", 00:17:02.246 "uuid": "8f961fd9-66eb-443d-a41c-aa1c5ce623d5", 00:17:02.246 "is_configured": true, 00:17:02.246 "data_offset": 0, 00:17:02.246 "data_size": 65536 
00:17:02.246 } 00:17:02.246 ] 00:17:02.246 }' 00:17:02.246 00:36:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.246 00:36:35 -- common/autotest_common.sh@10 -- # set +x 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:03.193 [2024-04-27 00:36:36.656652] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.193 [2024-04-27 00:36:36.656866] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.193 [2024-04-27 00:36:36.657033] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.193 00:36:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.451 00:36:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:03.451 "name": "Existed_Raid", 00:17:03.451 "uuid": "9853879a-56b7-4c44-949f-06bf95ec9490", 00:17:03.451 "strip_size_kb": 64, 00:17:03.451 "state": "offline", 00:17:03.451 "raid_level": "raid0", 00:17:03.451 "superblock": false, 00:17:03.451 "num_base_bdevs": 3, 00:17:03.451 "num_base_bdevs_discovered": 2, 00:17:03.451 "num_base_bdevs_operational": 2, 00:17:03.451 "base_bdevs_list": [ 00:17:03.451 { 00:17:03.451 "name": null, 00:17:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.451 "is_configured": false, 00:17:03.451 "data_offset": 0, 00:17:03.451 "data_size": 65536 00:17:03.451 }, 00:17:03.451 { 00:17:03.451 "name": "BaseBdev2", 00:17:03.451 "uuid": "28bae87e-1dc9-4b63-8681-d350daa3fc2d", 00:17:03.451 "is_configured": true, 00:17:03.451 "data_offset": 0, 00:17:03.451 "data_size": 65536 00:17:03.451 }, 00:17:03.451 { 00:17:03.451 "name": "BaseBdev3", 00:17:03.451 "uuid": "8f961fd9-66eb-443d-a41c-aa1c5ce623d5", 00:17:03.451 "is_configured": true, 00:17:03.451 "data_offset": 0, 00:17:03.451 "data_size": 65536 00:17:03.451 } 00:17:03.451 ] 00:17:03.451 }' 00:17:03.451 00:36:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:03.451 00:36:36 -- common/autotest_common.sh@10 -- # set +x 00:17:04.017 00:36:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:04.017 00:36:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:04.017 00:36:37 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.017 00:36:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:04.274 00:36:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:04.274 00:36:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.274 00:36:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:04.533 [2024-04-27 00:36:38.011296] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.533 00:36:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:04.533 00:36:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:04.533 00:36:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:04.533 00:36:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.791 00:36:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:04.791 00:36:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.791 00:36:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:05.048 [2024-04-27 00:36:38.603160] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:05.048 [2024-04-27 00:36:38.603406] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:17:05.306 00:36:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:05.306 00:36:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:05.306 00:36:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.306 00:36:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.564 00:36:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:05.564 00:36:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:05.564 00:36:38 -- bdev/bdev_raid.sh@287 -- # killprocess 122372 00:17:05.564 00:36:38 -- common/autotest_common.sh@936 -- # '[' -z 122372 ']' 00:17:05.564 00:36:38 -- common/autotest_common.sh@940 -- # kill -0 122372 00:17:05.564 00:36:38 -- common/autotest_common.sh@941 -- # uname 00:17:05.564 00:36:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.564 00:36:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122372 00:17:05.564 00:36:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:05.564 00:36:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:05.564 00:36:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122372' 00:17:05.564 killing process with pid 122372 00:17:05.564 00:36:38 -- common/autotest_common.sh@955 -- # kill 122372 00:17:05.564 [2024-04-27 00:36:38.937213] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.564 00:36:38 -- common/autotest_common.sh@960 -- # wait 122372 00:17:05.564 [2024-04-27 00:36:38.937455] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.606 ************************************ 00:17:06.606 END TEST raid_state_function_test 00:17:06.606 ************************************ 00:17:06.606 00:36:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:06.606 00:17:06.606 real 0m12.529s 00:17:06.606 user 0m22.039s 00:17:06.606 sys 0m1.559s 00:17:06.606 00:36:39 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:17:06.606 00:36:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.606 00:36:39 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:06.606 00:36:39 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:06.606 00:36:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.606 00:36:39 -- common/autotest_common.sh@10 -- # set +x 00:17:06.606 ************************************ 00:17:06.606 START TEST raid_state_function_test_sb 00:17:06.606 ************************************ 00:17:06.606 00:36:40 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 3 true 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=122758 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122758' 00:17:06.606 Process raid pid: 122758 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:06.606 00:36:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122758 /var/tmp/spdk-raid.sock 00:17:06.606 00:36:40 -- common/autotest_common.sh@817 -- # '[' -z 122758 ']' 00:17:06.606 00:36:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:06.606 00:36:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:06.606 00:36:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:06.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:06.606 00:36:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:06.606 00:36:40 -- common/autotest_common.sh@10 -- # set +x 00:17:06.606 [2024-04-27 00:36:40.120798] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:06.606 [2024-04-27 00:36:40.121161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.864 [2024-04-27 00:36:40.290476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.166 [2024-04-27 00:36:40.478006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.166 [2024-04-27 00:36:40.668246] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.729 00:36:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:07.729 00:36:41 -- common/autotest_common.sh@850 -- # return 0 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:07.729 [2024-04-27 00:36:41.273526] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.729 [2024-04-27 00:36:41.273774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.729 [2024-04-27 00:36:41.273891] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.729 [2024-04-27 00:36:41.273950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.729 [2024-04-27 00:36:41.274040] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:07.729 [2024-04-27 00:36:41.274185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.729 00:36:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.986 00:36:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.986 "name": "Existed_Raid", 00:17:07.986 "uuid": "ef40bf91-8587-408f-86f6-761fa35940b8", 00:17:07.986 "strip_size_kb": 64, 00:17:07.986 "state": "configuring", 00:17:07.986 "raid_level": "raid0", 00:17:07.986 "superblock": true, 00:17:07.986 "num_base_bdevs": 3, 00:17:07.986 "num_base_bdevs_discovered": 0, 00:17:07.986 "num_base_bdevs_operational": 3, 00:17:07.986 "base_bdevs_list": [ 00:17:07.986 { 00:17:07.986 "name": 
"BaseBdev1", 00:17:07.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.986 "is_configured": false, 00:17:07.986 "data_offset": 0, 00:17:07.987 "data_size": 0 00:17:07.987 }, 00:17:07.987 { 00:17:07.987 "name": "BaseBdev2", 00:17:07.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.987 "is_configured": false, 00:17:07.987 "data_offset": 0, 00:17:07.987 "data_size": 0 00:17:07.987 }, 00:17:07.987 { 00:17:07.987 "name": "BaseBdev3", 00:17:07.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.987 "is_configured": false, 00:17:07.987 "data_offset": 0, 00:17:07.987 "data_size": 0 00:17:07.987 } 00:17:07.987 ] 00:17:07.987 }' 00:17:07.987 00:36:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.987 00:36:41 -- common/autotest_common.sh@10 -- # set +x 00:17:08.551 00:36:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.809 [2024-04-27 00:36:42.317622] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.809 [2024-04-27 00:36:42.317845] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:17:08.809 00:36:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:09.067 [2024-04-27 00:36:42.517673] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:09.068 [2024-04-27 00:36:42.517926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:09.068 [2024-04-27 00:36:42.518034] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.068 [2024-04-27 00:36:42.518095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.068 [2024-04-27 00:36:42.518184] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.068 [2024-04-27 00:36:42.518313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.068 00:36:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:09.325 [2024-04-27 00:36:42.764629] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.325 BaseBdev1 00:17:09.325 00:36:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:09.325 00:36:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:09.325 00:36:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:09.325 00:36:42 -- common/autotest_common.sh@887 -- # local i 00:17:09.325 00:36:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:09.326 00:36:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:09.326 00:36:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.587 00:36:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:09.845 [ 00:17:09.845 { 00:17:09.845 "name": "BaseBdev1", 00:17:09.845 "aliases": [ 00:17:09.845 "a17ea048-963c-4ede-bd88-d8dcae3ed692" 00:17:09.845 ], 00:17:09.845 "product_name": "Malloc disk", 00:17:09.845 "block_size": 512, 00:17:09.845 
"num_blocks": 65536, 00:17:09.845 "uuid": "a17ea048-963c-4ede-bd88-d8dcae3ed692", 00:17:09.845 "assigned_rate_limits": { 00:17:09.845 "rw_ios_per_sec": 0, 00:17:09.845 "rw_mbytes_per_sec": 0, 00:17:09.845 "r_mbytes_per_sec": 0, 00:17:09.845 "w_mbytes_per_sec": 0 00:17:09.845 }, 00:17:09.845 "claimed": true, 00:17:09.845 "claim_type": "exclusive_write", 00:17:09.845 "zoned": false, 00:17:09.845 "supported_io_types": { 00:17:09.845 "read": true, 00:17:09.845 "write": true, 00:17:09.845 "unmap": true, 00:17:09.845 "write_zeroes": true, 00:17:09.845 "flush": true, 00:17:09.845 "reset": true, 00:17:09.845 "compare": false, 00:17:09.845 "compare_and_write": false, 00:17:09.845 "abort": true, 00:17:09.845 "nvme_admin": false, 00:17:09.845 "nvme_io": false 00:17:09.845 }, 00:17:09.845 "memory_domains": [ 00:17:09.845 { 00:17:09.845 "dma_device_id": "system", 00:17:09.845 "dma_device_type": 1 00:17:09.845 }, 00:17:09.845 { 00:17:09.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.845 "dma_device_type": 2 00:17:09.845 } 00:17:09.845 ], 00:17:09.845 "driver_specific": {} 00:17:09.845 } 00:17:09.845 ] 00:17:09.845 00:36:43 -- common/autotest_common.sh@893 -- # return 0 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.845 00:36:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.103 00:36:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:10.103 "name": "Existed_Raid", 00:17:10.103 "uuid": "b9737ded-8cac-4bba-a29b-3f14da87fc73", 00:17:10.103 "strip_size_kb": 64, 00:17:10.103 "state": "configuring", 00:17:10.103 "raid_level": "raid0", 00:17:10.103 "superblock": true, 00:17:10.103 "num_base_bdevs": 3, 00:17:10.103 "num_base_bdevs_discovered": 1, 00:17:10.103 "num_base_bdevs_operational": 3, 00:17:10.103 "base_bdevs_list": [ 00:17:10.103 { 00:17:10.103 "name": "BaseBdev1", 00:17:10.103 "uuid": "a17ea048-963c-4ede-bd88-d8dcae3ed692", 00:17:10.103 "is_configured": true, 00:17:10.103 "data_offset": 2048, 00:17:10.103 "data_size": 63488 00:17:10.103 }, 00:17:10.103 { 00:17:10.103 "name": "BaseBdev2", 00:17:10.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.103 "is_configured": false, 00:17:10.103 "data_offset": 0, 00:17:10.103 "data_size": 0 00:17:10.103 }, 00:17:10.103 { 00:17:10.103 "name": "BaseBdev3", 00:17:10.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.103 "is_configured": false, 00:17:10.103 "data_offset": 0, 00:17:10.103 "data_size": 0 00:17:10.103 } 00:17:10.103 ] 00:17:10.103 }' 00:17:10.103 00:36:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:10.103 00:36:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.668 
00:36:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:10.926 [2024-04-27 00:36:44.373061] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:10.926 [2024-04-27 00:36:44.373278] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:17:10.926 00:36:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:10.926 00:36:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:11.184 00:36:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.442 BaseBdev1 00:17:11.442 00:36:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:11.442 00:36:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:11.442 00:36:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:11.442 00:36:44 -- common/autotest_common.sh@887 -- # local i 00:17:11.442 00:36:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:11.442 00:36:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:11.442 00:36:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.699 00:36:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.957 [ 00:17:11.957 { 00:17:11.957 "name": "BaseBdev1", 00:17:11.957 "aliases": [ 00:17:11.957 "94849efb-6553-4f06-9417-263cd38688bf" 00:17:11.957 ], 00:17:11.957 "product_name": "Malloc disk", 00:17:11.957 "block_size": 512, 00:17:11.957 "num_blocks": 65536, 00:17:11.957 "uuid": "94849efb-6553-4f06-9417-263cd38688bf", 00:17:11.957 "assigned_rate_limits": { 00:17:11.957 "rw_ios_per_sec": 0, 00:17:11.957 "rw_mbytes_per_sec": 0, 00:17:11.957 "r_mbytes_per_sec": 0, 00:17:11.957 "w_mbytes_per_sec": 0 00:17:11.957 }, 00:17:11.957 "claimed": false, 00:17:11.957 "zoned": false, 00:17:11.957 "supported_io_types": { 00:17:11.957 "read": true, 00:17:11.957 "write": true, 00:17:11.957 "unmap": true, 00:17:11.957 "write_zeroes": true, 00:17:11.957 "flush": true, 00:17:11.957 "reset": true, 00:17:11.957 "compare": false, 00:17:11.957 "compare_and_write": false, 00:17:11.957 "abort": true, 00:17:11.957 "nvme_admin": false, 00:17:11.957 "nvme_io": false 00:17:11.957 }, 00:17:11.957 "memory_domains": [ 00:17:11.957 { 00:17:11.957 "dma_device_id": "system", 00:17:11.957 "dma_device_type": 1 00:17:11.957 }, 00:17:11.957 { 00:17:11.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.957 "dma_device_type": 2 00:17:11.957 } 00:17:11.957 ], 00:17:11.957 "driver_specific": {} 00:17:11.957 } 00:17:11.957 ] 00:17:11.957 00:36:45 -- common/autotest_common.sh@893 -- # return 0 00:17:11.957 00:36:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:12.215 [2024-04-27 00:36:45.608857] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.215 [2024-04-27 00:36:45.611000] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.215 [2024-04-27 00:36:45.611218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:17:12.215 [2024-04-27 00:36:45.611347] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:12.215 [2024-04-27 00:36:45.611424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.215 00:36:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.473 00:36:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.473 "name": "Existed_Raid", 00:17:12.473 "uuid": "2dd06095-cf39-47b9-8248-4efd97d42a98", 00:17:12.473 "strip_size_kb": 64, 00:17:12.473 "state": "configuring", 00:17:12.473 "raid_level": "raid0", 00:17:12.473 "superblock": true, 00:17:12.473 "num_base_bdevs": 3, 00:17:12.473 "num_base_bdevs_discovered": 1, 00:17:12.473 "num_base_bdevs_operational": 3, 00:17:12.473 "base_bdevs_list": [ 00:17:12.473 { 00:17:12.473 "name": "BaseBdev1", 00:17:12.473 "uuid": "94849efb-6553-4f06-9417-263cd38688bf", 00:17:12.473 "is_configured": true, 00:17:12.473 "data_offset": 2048, 00:17:12.473 "data_size": 63488 00:17:12.473 }, 00:17:12.473 { 00:17:12.473 "name": "BaseBdev2", 00:17:12.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.473 "is_configured": false, 00:17:12.473 "data_offset": 0, 00:17:12.473 "data_size": 0 00:17:12.473 }, 00:17:12.473 { 00:17:12.473 "name": "BaseBdev3", 00:17:12.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.473 "is_configured": false, 00:17:12.473 "data_offset": 0, 00:17:12.473 "data_size": 0 00:17:12.473 } 00:17:12.473 ] 00:17:12.473 }' 00:17:12.473 00:36:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.473 00:36:45 -- common/autotest_common.sh@10 -- # set +x 00:17:13.048 00:36:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:13.322 [2024-04-27 00:36:46.791893] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.322 BaseBdev2 00:17:13.322 00:36:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:13.322 00:36:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:13.322 00:36:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:13.322 00:36:46 -- common/autotest_common.sh@887 -- # local i 00:17:13.322 00:36:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:13.322 00:36:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 
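The trace above exercises delayed assembly: bdev_raid_create is issued before all of its base bdevs exist, so Existed_Raid sits in the "configuring" state and claims each BaseBdevN as it appears, going "online" only once the last one arrives. A minimal hand-driven sketch of that flow (assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock; the rpc shell variable and the loop are illustrative, while the RPC names and arguments are the ones the test itself uses):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Register the raid first; each missing base bdev is reported as
    # "doesn't exist now" and the array stays in state "configuring".
    $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # Create the 32 MiB, 512-byte-block malloc base bdevs (65536 blocks each);
    # the raid claims each one on arrival and goes online after the third.
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $rpc bdev_malloc_create 32 512 -b "$b"
        $rpc bdev_wait_for_examine
    done
    # Inspect the resulting state the same way verify_raid_bdev_state does.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
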
00:17:13.322 00:36:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.580 00:36:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:13.838 [ 00:17:13.838 { 00:17:13.838 "name": "BaseBdev2", 00:17:13.838 "aliases": [ 00:17:13.838 "cddaca50-52b3-4aa0-ab5c-564c92699e3b" 00:17:13.838 ], 00:17:13.838 "product_name": "Malloc disk", 00:17:13.838 "block_size": 512, 00:17:13.838 "num_blocks": 65536, 00:17:13.838 "uuid": "cddaca50-52b3-4aa0-ab5c-564c92699e3b", 00:17:13.838 "assigned_rate_limits": { 00:17:13.838 "rw_ios_per_sec": 0, 00:17:13.838 "rw_mbytes_per_sec": 0, 00:17:13.838 "r_mbytes_per_sec": 0, 00:17:13.838 "w_mbytes_per_sec": 0 00:17:13.838 }, 00:17:13.838 "claimed": true, 00:17:13.838 "claim_type": "exclusive_write", 00:17:13.838 "zoned": false, 00:17:13.838 "supported_io_types": { 00:17:13.838 "read": true, 00:17:13.838 "write": true, 00:17:13.838 "unmap": true, 00:17:13.838 "write_zeroes": true, 00:17:13.838 "flush": true, 00:17:13.838 "reset": true, 00:17:13.838 "compare": false, 00:17:13.838 "compare_and_write": false, 00:17:13.838 "abort": true, 00:17:13.838 "nvme_admin": false, 00:17:13.838 "nvme_io": false 00:17:13.838 }, 00:17:13.838 "memory_domains": [ 00:17:13.838 { 00:17:13.838 "dma_device_id": "system", 00:17:13.838 "dma_device_type": 1 00:17:13.838 }, 00:17:13.838 { 00:17:13.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.838 "dma_device_type": 2 00:17:13.838 } 00:17:13.838 ], 00:17:13.838 "driver_specific": {} 00:17:13.838 } 00:17:13.838 ] 00:17:13.838 00:36:47 -- common/autotest_common.sh@893 -- # return 0 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.838 00:36:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.097 00:36:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.097 "name": "Existed_Raid", 00:17:14.097 "uuid": "2dd06095-cf39-47b9-8248-4efd97d42a98", 00:17:14.097 "strip_size_kb": 64, 00:17:14.097 "state": "configuring", 00:17:14.097 "raid_level": "raid0", 00:17:14.097 "superblock": true, 00:17:14.097 "num_base_bdevs": 3, 00:17:14.097 "num_base_bdevs_discovered": 2, 00:17:14.097 "num_base_bdevs_operational": 3, 00:17:14.097 "base_bdevs_list": [ 00:17:14.097 { 00:17:14.097 "name": "BaseBdev1", 00:17:14.097 "uuid": "94849efb-6553-4f06-9417-263cd38688bf", 00:17:14.097 "is_configured": true, 
00:17:14.097 "data_offset": 2048, 00:17:14.097 "data_size": 63488 00:17:14.097 }, 00:17:14.097 { 00:17:14.097 "name": "BaseBdev2", 00:17:14.097 "uuid": "cddaca50-52b3-4aa0-ab5c-564c92699e3b", 00:17:14.097 "is_configured": true, 00:17:14.097 "data_offset": 2048, 00:17:14.097 "data_size": 63488 00:17:14.097 }, 00:17:14.097 { 00:17:14.097 "name": "BaseBdev3", 00:17:14.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.097 "is_configured": false, 00:17:14.097 "data_offset": 0, 00:17:14.097 "data_size": 0 00:17:14.097 } 00:17:14.097 ] 00:17:14.097 }' 00:17:14.097 00:36:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.097 00:36:47 -- common/autotest_common.sh@10 -- # set +x 00:17:14.663 00:36:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:14.921 [2024-04-27 00:36:48.479849] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:14.921 [2024-04-27 00:36:48.480357] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:14.922 [2024-04-27 00:36:48.480525] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:14.922 [2024-04-27 00:36:48.480707] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:14.922 [2024-04-27 00:36:48.481106] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:14.922 BaseBdev3 00:17:14.922 [2024-04-27 00:36:48.481297] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:17:14.922 [2024-04-27 00:36:48.481577] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.922 00:36:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:14.922 00:36:48 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:17:14.922 00:36:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:14.922 00:36:48 -- common/autotest_common.sh@887 -- # local i 00:17:14.922 00:36:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:14.922 00:36:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:14.922 00:36:48 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.488 00:36:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:15.488 [ 00:17:15.488 { 00:17:15.488 "name": "BaseBdev3", 00:17:15.488 "aliases": [ 00:17:15.488 "9df5f2c7-11ad-4903-a59d-14deeecddf77" 00:17:15.488 ], 00:17:15.488 "product_name": "Malloc disk", 00:17:15.488 "block_size": 512, 00:17:15.488 "num_blocks": 65536, 00:17:15.488 "uuid": "9df5f2c7-11ad-4903-a59d-14deeecddf77", 00:17:15.488 "assigned_rate_limits": { 00:17:15.488 "rw_ios_per_sec": 0, 00:17:15.488 "rw_mbytes_per_sec": 0, 00:17:15.488 "r_mbytes_per_sec": 0, 00:17:15.488 "w_mbytes_per_sec": 0 00:17:15.488 }, 00:17:15.488 "claimed": true, 00:17:15.488 "claim_type": "exclusive_write", 00:17:15.488 "zoned": false, 00:17:15.488 "supported_io_types": { 00:17:15.488 "read": true, 00:17:15.488 "write": true, 00:17:15.488 "unmap": true, 00:17:15.488 "write_zeroes": true, 00:17:15.488 "flush": true, 00:17:15.488 "reset": true, 00:17:15.488 "compare": false, 00:17:15.488 "compare_and_write": false, 00:17:15.488 "abort": true, 00:17:15.488 "nvme_admin": false, 00:17:15.488 
"nvme_io": false 00:17:15.488 }, 00:17:15.488 "memory_domains": [ 00:17:15.488 { 00:17:15.488 "dma_device_id": "system", 00:17:15.488 "dma_device_type": 1 00:17:15.488 }, 00:17:15.488 { 00:17:15.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.488 "dma_device_type": 2 00:17:15.488 } 00:17:15.488 ], 00:17:15.488 "driver_specific": {} 00:17:15.488 } 00:17:15.488 ] 00:17:15.488 00:36:48 -- common/autotest_common.sh@893 -- # return 0 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.488 00:36:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.747 00:36:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.747 "name": "Existed_Raid", 00:17:15.747 "uuid": "2dd06095-cf39-47b9-8248-4efd97d42a98", 00:17:15.747 "strip_size_kb": 64, 00:17:15.747 "state": "online", 00:17:15.747 "raid_level": "raid0", 00:17:15.747 "superblock": true, 00:17:15.747 "num_base_bdevs": 3, 00:17:15.747 "num_base_bdevs_discovered": 3, 00:17:15.747 "num_base_bdevs_operational": 3, 00:17:15.747 "base_bdevs_list": [ 00:17:15.747 { 00:17:15.747 "name": "BaseBdev1", 00:17:15.747 "uuid": "94849efb-6553-4f06-9417-263cd38688bf", 00:17:15.747 "is_configured": true, 00:17:15.747 "data_offset": 2048, 00:17:15.747 "data_size": 63488 00:17:15.747 }, 00:17:15.747 { 00:17:15.747 "name": "BaseBdev2", 00:17:15.747 "uuid": "cddaca50-52b3-4aa0-ab5c-564c92699e3b", 00:17:15.747 "is_configured": true, 00:17:15.747 "data_offset": 2048, 00:17:15.747 "data_size": 63488 00:17:15.747 }, 00:17:15.747 { 00:17:15.747 "name": "BaseBdev3", 00:17:15.747 "uuid": "9df5f2c7-11ad-4903-a59d-14deeecddf77", 00:17:15.747 "is_configured": true, 00:17:15.747 "data_offset": 2048, 00:17:15.747 "data_size": 63488 00:17:15.747 } 00:17:15.747 ] 00:17:15.747 }' 00:17:15.747 00:36:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.747 00:36:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.314 00:36:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:16.572 [2024-04-27 00:36:50.040445] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.572 [2024-04-27 00:36:50.040701] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.572 [2024-04-27 00:36:50.040936] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:16.572 00:36:50 -- 
bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.572 00:36:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.139 00:36:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.139 "name": "Existed_Raid", 00:17:17.139 "uuid": "2dd06095-cf39-47b9-8248-4efd97d42a98", 00:17:17.139 "strip_size_kb": 64, 00:17:17.139 "state": "offline", 00:17:17.139 "raid_level": "raid0", 00:17:17.139 "superblock": true, 00:17:17.139 "num_base_bdevs": 3, 00:17:17.139 "num_base_bdevs_discovered": 2, 00:17:17.139 "num_base_bdevs_operational": 2, 00:17:17.139 "base_bdevs_list": [ 00:17:17.139 { 00:17:17.139 "name": null, 00:17:17.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.139 "is_configured": false, 00:17:17.139 "data_offset": 2048, 00:17:17.139 "data_size": 63488 00:17:17.139 }, 00:17:17.139 { 00:17:17.139 "name": "BaseBdev2", 00:17:17.139 "uuid": "cddaca50-52b3-4aa0-ab5c-564c92699e3b", 00:17:17.139 "is_configured": true, 00:17:17.139 "data_offset": 2048, 00:17:17.139 "data_size": 63488 00:17:17.139 }, 00:17:17.139 { 00:17:17.139 "name": "BaseBdev3", 00:17:17.139 "uuid": "9df5f2c7-11ad-4903-a59d-14deeecddf77", 00:17:17.139 "is_configured": true, 00:17:17.139 "data_offset": 2048, 00:17:17.139 "data_size": 63488 00:17:17.139 } 00:17:17.139 ] 00:17:17.139 }' 00:17:17.139 00:36:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.139 00:36:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.706 00:36:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:17.706 00:36:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:17.706 00:36:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:17.706 00:36:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.965 00:36:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:17.965 00:36:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:17.965 00:36:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:18.224 [2024-04-27 00:36:51.562842] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:18.224 00:36:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:18.224 00:36:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:18.224 00:36:51 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.224 00:36:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:18.483 00:36:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:18.483 00:36:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:18.483 00:36:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:18.742 [2024-04-27 00:36:52.088940] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:18.742 [2024-04-27 00:36:52.089439] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:17:18.742 00:36:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:18.742 00:36:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:18.742 00:36:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.742 00:36:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:19.001 00:36:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:19.001 00:36:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:19.001 00:36:52 -- bdev/bdev_raid.sh@287 -- # killprocess 122758 00:17:19.001 00:36:52 -- common/autotest_common.sh@936 -- # '[' -z 122758 ']' 00:17:19.001 00:36:52 -- common/autotest_common.sh@940 -- # kill -0 122758 00:17:19.001 00:36:52 -- common/autotest_common.sh@941 -- # uname 00:17:19.001 00:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.001 00:36:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 122758 00:17:19.001 00:36:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:19.001 00:36:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:19.001 killing process with pid 122758 00:17:19.001 00:36:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 122758' 00:17:19.001 00:36:52 -- common/autotest_common.sh@955 -- # kill 122758 00:17:19.001 [2024-04-27 00:36:52.447186] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.001 00:36:52 -- common/autotest_common.sh@960 -- # wait 122758 00:17:19.001 [2024-04-27 00:36:52.447293] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.940 ************************************ 00:17:19.940 END TEST raid_state_function_test_sb 00:17:19.940 ************************************ 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:19.940 00:17:19.940 real 0m13.381s 00:17:19.940 user 0m23.686s 00:17:19.940 sys 0m1.497s 00:17:19.940 00:36:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:19.940 00:36:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:19.940 00:36:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:19.940 00:36:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:19.940 00:36:53 -- common/autotest_common.sh@10 -- # set +x 00:17:19.940 ************************************ 00:17:19.940 START TEST raid_superblock_test 00:17:19.940 ************************************ 00:17:19.940 00:36:53 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 3 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@339 -- # local 
num_base_bdevs=3 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=123171 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123171 /var/tmp/spdk-raid.sock 00:17:19.940 00:36:53 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:19.940 00:36:53 -- common/autotest_common.sh@817 -- # '[' -z 123171 ']' 00:17:19.940 00:36:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:19.940 00:36:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:19.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:19.940 00:36:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:19.940 00:36:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:19.940 00:36:53 -- common/autotest_common.sh@10 -- # set +x 00:17:20.198 [2024-04-27 00:36:53.581767] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
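The superblock test starting here builds raid_bdev1 out of passthru bdevs layered over malloc disks, so the raid superblock written through each ptN lands on the underlying mallocN. A rough sketch of the construction the script performs next (the loop is illustrative; the commands, bdev names, and fixed UUIDs are the ones visible in the trace below):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b malloc$i
        $rpc bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
    # -s writes an on-disk superblock to every base bdev.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

Because of that superblock, a later bdev_raid_create over the raw malloc1..malloc3 devices is rejected with -17 ("Failed to create RAID bdev raid_bdev1: File exists"), which is the negative case the test checks further down.
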
00:17:20.198 [2024-04-27 00:36:53.581958] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123171 ] 00:17:20.198 [2024-04-27 00:36:53.752741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.457 [2024-04-27 00:36:53.986544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.716 [2024-04-27 00:36:54.155377] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.973 00:36:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:20.973 00:36:54 -- common/autotest_common.sh@850 -- # return 0 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.973 00:36:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:21.231 malloc1 00:17:21.231 00:36:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:21.489 [2024-04-27 00:36:55.031056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:21.489 [2024-04-27 00:36:55.031148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.489 [2024-04-27 00:36:55.031184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:21.489 [2024-04-27 00:36:55.031227] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.489 [2024-04-27 00:36:55.033463] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.489 [2024-04-27 00:36:55.033525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:21.489 pt1 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:21.489 00:36:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:22.071 malloc2 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:22.071 [2024-04-27 00:36:55.541098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:22.071 [2024-04-27 00:36:55.541190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.071 [2024-04-27 00:36:55.541235] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:22.071 [2024-04-27 00:36:55.541287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.071 [2024-04-27 00:36:55.543710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.071 [2024-04-27 00:36:55.543773] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:22.071 pt2 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:22.071 00:36:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:22.329 malloc3 00:17:22.329 00:36:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:22.588 [2024-04-27 00:36:55.993108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:22.588 [2024-04-27 00:36:55.993224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.588 [2024-04-27 00:36:55.993272] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:22.588 [2024-04-27 00:36:55.993316] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.588 [2024-04-27 00:36:55.995738] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.588 [2024-04-27 00:36:55.995808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:22.588 pt3 00:17:22.588 00:36:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:22.588 00:36:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:22.588 00:36:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:22.847 [2024-04-27 00:36:56.245190] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:22.847 [2024-04-27 00:36:56.247236] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.847 [2024-04-27 00:36:56.247326] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:22.847 [2024-04-27 00:36:56.247560] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:17:22.847 [2024-04-27 00:36:56.247575] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:22.847 [2024-04-27 00:36:56.247713] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:22.847 [2024-04-27 00:36:56.248077] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:17:22.847 [2024-04-27 00:36:56.248102] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:17:22.847 [2024-04-27 00:36:56.248262] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.847 00:36:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.106 00:36:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.106 "name": "raid_bdev1", 00:17:23.106 "uuid": "460873d7-eb21-4d04-9b62-9303ab20832b", 00:17:23.106 "strip_size_kb": 64, 00:17:23.106 "state": "online", 00:17:23.106 "raid_level": "raid0", 00:17:23.106 "superblock": true, 00:17:23.106 "num_base_bdevs": 3, 00:17:23.106 "num_base_bdevs_discovered": 3, 00:17:23.106 "num_base_bdevs_operational": 3, 00:17:23.106 "base_bdevs_list": [ 00:17:23.106 { 00:17:23.106 "name": "pt1", 00:17:23.106 "uuid": "022438a4-91ea-5a42-b806-6b5d5cb3e44e", 00:17:23.106 "is_configured": true, 00:17:23.106 "data_offset": 2048, 00:17:23.106 "data_size": 63488 00:17:23.106 }, 00:17:23.106 { 00:17:23.106 "name": "pt2", 00:17:23.106 "uuid": "5d405dd4-13aa-5b53-91f7-9dc99c8a16a7", 00:17:23.106 "is_configured": true, 00:17:23.106 "data_offset": 2048, 00:17:23.106 "data_size": 63488 00:17:23.106 }, 00:17:23.106 { 00:17:23.106 "name": "pt3", 00:17:23.106 "uuid": "11c7bc0f-4915-5f25-8ffb-da0d712a3998", 00:17:23.106 "is_configured": true, 00:17:23.106 "data_offset": 2048, 00:17:23.106 "data_size": 63488 00:17:23.106 } 00:17:23.106 ] 00:17:23.106 }' 00:17:23.106 00:36:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.106 00:36:56 -- common/autotest_common.sh@10 -- # set +x 00:17:23.673 00:36:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:23.673 00:36:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:23.932 [2024-04-27 00:36:57.325578] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.932 00:36:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=460873d7-eb21-4d04-9b62-9303ab20832b 00:17:23.932 00:36:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 460873d7-eb21-4d04-9b62-9303ab20832b ']' 00:17:23.932 00:36:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:24.191 [2024-04-27 00:36:57.585400] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.191 [2024-04-27 00:36:57.585450] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.191 [2024-04-27 00:36:57.585531] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.191 [2024-04-27 00:36:57.585601] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.191 [2024-04-27 00:36:57.585612] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:17:24.191 00:36:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.191 00:36:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:24.449 00:36:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:24.449 00:36:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:24.449 00:36:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.449 00:36:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:24.708 00:36:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.708 00:36:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:24.967 00:36:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.967 00:36:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:25.226 00:36:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:25.226 00:36:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:25.492 00:36:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:25.493 00:36:58 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:25.493 00:36:58 -- common/autotest_common.sh@638 -- # local es=0 00:17:25.493 00:36:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:25.493 00:36:58 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.493 00:36:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:25.493 00:36:58 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.493 00:36:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:25.493 00:36:58 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.493 00:36:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:25.493 00:36:58 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.493 00:36:58 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:25.493 00:36:58 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:25.493 [2024-04-27 00:36:59.057754] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:25.493 [2024-04-27 00:36:59.060360] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:25.493 [2024-04-27 00:36:59.060438] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:25.493 [2024-04-27 00:36:59.060515] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:25.493 [2024-04-27 00:36:59.060659] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:25.493 [2024-04-27 00:36:59.060724] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:25.493 [2024-04-27 00:36:59.060802] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.493 [2024-04-27 00:36:59.060822] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:17:25.493 request: 00:17:25.493 { 00:17:25.493 "name": "raid_bdev1", 00:17:25.493 "raid_level": "raid0", 00:17:25.493 "base_bdevs": [ 00:17:25.493 "malloc1", 00:17:25.493 "malloc2", 00:17:25.493 "malloc3" 00:17:25.493 ], 00:17:25.493 "superblock": false, 00:17:25.493 "strip_size_kb": 64, 00:17:25.493 "method": "bdev_raid_create", 00:17:25.493 "req_id": 1 00:17:25.493 } 00:17:25.493 Got JSON-RPC error response 00:17:25.493 response: 00:17:25.493 { 00:17:25.493 "code": -17, 00:17:25.493 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:25.493 } 00:17:25.493 00:36:59 -- common/autotest_common.sh@641 -- # es=1 00:17:25.493 00:36:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:25.493 00:36:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:25.493 00:36:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:25.493 00:36:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.493 00:36:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:25.753 00:36:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:25.753 00:36:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:25.753 00:36:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:26.011 [2024-04-27 00:36:59.533751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:26.011 [2024-04-27 00:36:59.533868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.011 [2024-04-27 00:36:59.533925] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:26.011 [2024-04-27 00:36:59.533966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.011 [2024-04-27 00:36:59.536435] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.011 [2024-04-27 00:36:59.536501] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:26.011 [2024-04-27 00:36:59.536648] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:26.011 [2024-04-27 00:36:59.536696] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:26.011 pt1 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.011 00:36:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.270 00:36:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.270 "name": "raid_bdev1", 00:17:26.270 "uuid": "460873d7-eb21-4d04-9b62-9303ab20832b", 00:17:26.270 "strip_size_kb": 64, 00:17:26.270 "state": "configuring", 00:17:26.270 "raid_level": "raid0", 00:17:26.270 "superblock": true, 00:17:26.270 "num_base_bdevs": 3, 00:17:26.270 "num_base_bdevs_discovered": 1, 00:17:26.270 "num_base_bdevs_operational": 3, 00:17:26.270 "base_bdevs_list": [ 00:17:26.270 { 00:17:26.270 "name": "pt1", 00:17:26.270 "uuid": "022438a4-91ea-5a42-b806-6b5d5cb3e44e", 00:17:26.270 "is_configured": true, 00:17:26.270 "data_offset": 2048, 00:17:26.270 "data_size": 63488 00:17:26.270 }, 00:17:26.270 { 00:17:26.270 "name": null, 00:17:26.270 "uuid": "5d405dd4-13aa-5b53-91f7-9dc99c8a16a7", 00:17:26.270 "is_configured": false, 00:17:26.270 "data_offset": 2048, 00:17:26.270 "data_size": 63488 00:17:26.270 }, 00:17:26.270 { 00:17:26.270 "name": null, 00:17:26.270 "uuid": "11c7bc0f-4915-5f25-8ffb-da0d712a3998", 00:17:26.270 "is_configured": false, 00:17:26.270 "data_offset": 2048, 00:17:26.270 "data_size": 63488 00:17:26.270 } 00:17:26.270 ] 00:17:26.270 }' 00:17:26.270 00:36:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.270 00:36:59 -- common/autotest_common.sh@10 -- # set +x 00:17:26.836 00:37:00 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:26.836 00:37:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:27.403 [2024-04-27 00:37:00.706106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:27.403 [2024-04-27 00:37:00.706234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.403 [2024-04-27 00:37:00.706288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:27.403 [2024-04-27 00:37:00.706342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.403 [2024-04-27 00:37:00.706902] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.403 [2024-04-27 00:37:00.706946] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:27.403 [2024-04-27 00:37:00.707104] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:27.403 [2024-04-27 00:37:00.707132] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:27.403 pt2 00:17:27.403 00:37:00 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:27.403 [2024-04-27 00:37:00.918114] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.403 00:37:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.662 00:37:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.662 "name": "raid_bdev1", 00:17:27.662 "uuid": "460873d7-eb21-4d04-9b62-9303ab20832b", 00:17:27.662 "strip_size_kb": 64, 00:17:27.662 "state": "configuring", 00:17:27.662 "raid_level": "raid0", 00:17:27.662 "superblock": true, 00:17:27.662 "num_base_bdevs": 3, 00:17:27.662 "num_base_bdevs_discovered": 1, 00:17:27.662 "num_base_bdevs_operational": 3, 00:17:27.662 "base_bdevs_list": [ 00:17:27.662 { 00:17:27.662 "name": "pt1", 00:17:27.662 "uuid": "022438a4-91ea-5a42-b806-6b5d5cb3e44e", 00:17:27.662 "is_configured": true, 00:17:27.662 "data_offset": 2048, 00:17:27.662 "data_size": 63488 00:17:27.662 }, 00:17:27.662 { 00:17:27.662 "name": null, 00:17:27.662 "uuid": "5d405dd4-13aa-5b53-91f7-9dc99c8a16a7", 00:17:27.662 "is_configured": false, 00:17:27.662 "data_offset": 2048, 00:17:27.662 "data_size": 63488 00:17:27.662 }, 00:17:27.662 { 00:17:27.662 "name": null, 00:17:27.662 "uuid": "11c7bc0f-4915-5f25-8ffb-da0d712a3998", 00:17:27.662 "is_configured": false, 00:17:27.662 "data_offset": 2048, 00:17:27.662 "data_size": 63488 00:17:27.662 } 00:17:27.662 ] 00:17:27.662 }' 00:17:27.662 00:37:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.662 00:37:01 -- common/autotest_common.sh@10 -- # set +x 00:17:28.597 00:37:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:28.597 00:37:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:28.597 00:37:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:28.597 [2024-04-27 00:37:02.102441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:28.597 [2024-04-27 00:37:02.102569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.597 [2024-04-27 00:37:02.102611] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:28.597 [2024-04-27 00:37:02.102683] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.597 [2024-04-27 00:37:02.103262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.597 [2024-04-27 00:37:02.103313] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:28.597 [2024-04-27 00:37:02.103439] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:28.597 [2024-04-27 00:37:02.103464] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:28.597 pt2 00:17:28.597 00:37:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:28.597 00:37:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:28.597 00:37:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:28.856 [2024-04-27 00:37:02.318474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:28.856 [2024-04-27 00:37:02.318580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.856 [2024-04-27 00:37:02.318618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:28.856 [2024-04-27 00:37:02.318687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.856 [2024-04-27 00:37:02.319209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.856 [2024-04-27 00:37:02.319258] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:28.856 [2024-04-27 00:37:02.319378] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:28.856 [2024-04-27 00:37:02.319404] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:28.856 [2024-04-27 00:37:02.319536] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:28.856 [2024-04-27 00:37:02.319550] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:28.856 [2024-04-27 00:37:02.319653] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:28.856 [2024-04-27 00:37:02.319993] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:28.856 [2024-04-27 00:37:02.320017] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:17:28.856 [2024-04-27 00:37:02.320156] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.856 pt3 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.856 00:37:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.856 
00:37:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.114 00:37:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:29.114 "name": "raid_bdev1", 00:17:29.114 "uuid": "460873d7-eb21-4d04-9b62-9303ab20832b", 00:17:29.114 "strip_size_kb": 64, 00:17:29.114 "state": "online", 00:17:29.114 "raid_level": "raid0", 00:17:29.114 "superblock": true, 00:17:29.114 "num_base_bdevs": 3, 00:17:29.114 "num_base_bdevs_discovered": 3, 00:17:29.114 "num_base_bdevs_operational": 3, 00:17:29.114 "base_bdevs_list": [ 00:17:29.114 { 00:17:29.114 "name": "pt1", 00:17:29.114 "uuid": "022438a4-91ea-5a42-b806-6b5d5cb3e44e", 00:17:29.114 "is_configured": true, 00:17:29.114 "data_offset": 2048, 00:17:29.114 "data_size": 63488 00:17:29.114 }, 00:17:29.114 { 00:17:29.114 "name": "pt2", 00:17:29.114 "uuid": "5d405dd4-13aa-5b53-91f7-9dc99c8a16a7", 00:17:29.114 "is_configured": true, 00:17:29.114 "data_offset": 2048, 00:17:29.114 "data_size": 63488 00:17:29.114 }, 00:17:29.114 { 00:17:29.114 "name": "pt3", 00:17:29.114 "uuid": "11c7bc0f-4915-5f25-8ffb-da0d712a3998", 00:17:29.114 "is_configured": true, 00:17:29.114 "data_offset": 2048, 00:17:29.114 "data_size": 63488 00:17:29.114 } 00:17:29.115 ] 00:17:29.115 }' 00:17:29.115 00:37:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:29.115 00:37:02 -- common/autotest_common.sh@10 -- # set +x 00:17:29.681 00:37:03 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:29.681 00:37:03 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:29.940 [2024-04-27 00:37:03.391135] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.940 00:37:03 -- bdev/bdev_raid.sh@430 -- # '[' 460873d7-eb21-4d04-9b62-9303ab20832b '!=' 460873d7-eb21-4d04-9b62-9303ab20832b ']' 00:17:29.940 00:37:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:29.940 00:37:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:29.940 00:37:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:29.940 00:37:03 -- bdev/bdev_raid.sh@511 -- # killprocess 123171 00:17:29.940 00:37:03 -- common/autotest_common.sh@936 -- # '[' -z 123171 ']' 00:17:29.940 00:37:03 -- common/autotest_common.sh@940 -- # kill -0 123171 00:17:29.940 00:37:03 -- common/autotest_common.sh@941 -- # uname 00:17:29.940 00:37:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.940 00:37:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123171 00:17:29.940 00:37:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:29.940 00:37:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:29.940 killing process with pid 123171 00:17:29.940 00:37:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123171' 00:17:29.940 00:37:03 -- common/autotest_common.sh@955 -- # kill 123171 00:17:29.940 [2024-04-27 00:37:03.433509] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.940 [2024-04-27 00:37:03.433580] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.940 00:37:03 -- common/autotest_common.sh@960 -- # wait 123171 00:17:29.940 [2024-04-27 00:37:03.433643] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.940 [2024-04-27 00:37:03.433660] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:17:30.199 [2024-04-27 00:37:03.709214] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.135 00:37:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:31.135 00:17:31.135 real 0m11.170s 00:17:31.135 user 0m19.474s 00:17:31.135 sys 0m1.302s 00:17:31.135 00:37:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:31.135 ************************************ 00:17:31.135 END TEST raid_superblock_test 00:17:31.135 00:37:04 -- common/autotest_common.sh@10 -- # set +x 00:17:31.135 ************************************ 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:31.394 00:37:04 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:31.394 00:37:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.394 00:37:04 -- common/autotest_common.sh@10 -- # set +x 00:17:31.394 ************************************ 00:17:31.394 START TEST raid_state_function_test 00:17:31.394 ************************************ 00:17:31.394 00:37:04 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 3 false 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=123486 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123486' 00:17:31.394 Process raid pid: 123486 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:31.394 00:37:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123486 /var/tmp/spdk-raid.sock 00:17:31.394 00:37:04 
-- common/autotest_common.sh@817 -- # '[' -z 123486 ']' 00:17:31.394 00:37:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:31.394 00:37:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:31.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:31.394 00:37:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:31.394 00:37:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:31.394 00:37:04 -- common/autotest_common.sh@10 -- # set +x 00:17:31.394 [2024-04-27 00:37:04.843066] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:31.394 [2024-04-27 00:37:04.843257] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.652 [2024-04-27 00:37:05.015909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.911 [2024-04-27 00:37:05.260503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.911 [2024-04-27 00:37:05.436679] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.476 00:37:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:32.476 00:37:05 -- common/autotest_common.sh@850 -- # return 0 00:17:32.476 00:37:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:32.476 [2024-04-27 00:37:06.043591] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.476 [2024-04-27 00:37:06.043688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.476 [2024-04-27 00:37:06.043717] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.476 [2024-04-27 00:37:06.043735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.476 [2024-04-27 00:37:06.043743] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.476 [2024-04-27 00:37:06.043782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.476 00:37:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.733 00:37:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.733 00:37:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:32.733 00:37:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.733 "name": "Existed_Raid", 00:17:32.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.733 "strip_size_kb": 64, 00:17:32.733 "state": "configuring", 00:17:32.733 "raid_level": "concat", 00:17:32.733 "superblock": false, 00:17:32.733 "num_base_bdevs": 3, 00:17:32.733 "num_base_bdevs_discovered": 0, 00:17:32.733 "num_base_bdevs_operational": 3, 00:17:32.733 "base_bdevs_list": [ 00:17:32.733 { 00:17:32.733 "name": "BaseBdev1", 00:17:32.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.733 "is_configured": false, 00:17:32.733 "data_offset": 0, 00:17:32.733 "data_size": 0 00:17:32.733 }, 00:17:32.733 { 00:17:32.733 "name": "BaseBdev2", 00:17:32.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.733 "is_configured": false, 00:17:32.733 "data_offset": 0, 00:17:32.733 "data_size": 0 00:17:32.733 }, 00:17:32.733 { 00:17:32.733 "name": "BaseBdev3", 00:17:32.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.733 "is_configured": false, 00:17:32.733 "data_offset": 0, 00:17:32.733 "data_size": 0 00:17:32.733 } 00:17:32.733 ] 00:17:32.733 }' 00:17:32.991 00:37:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.991 00:37:06 -- common/autotest_common.sh@10 -- # set +x 00:17:33.558 00:37:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:33.558 [2024-04-27 00:37:07.127728] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.558 [2024-04-27 00:37:07.127788] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:17:33.558 00:37:07 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:33.816 [2024-04-27 00:37:07.339802] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.816 [2024-04-27 00:37:07.339894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.816 [2024-04-27 00:37:07.339923] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.816 [2024-04-27 00:37:07.339942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.816 [2024-04-27 00:37:07.339950] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:33.816 [2024-04-27 00:37:07.339974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.816 00:37:07 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:34.074 [2024-04-27 00:37:07.593973] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:34.074 BaseBdev1 00:17:34.074 00:37:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:34.074 00:37:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:34.074 00:37:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:34.074 00:37:07 -- common/autotest_common.sh@887 -- # local i 00:17:34.074 00:37:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:34.074 00:37:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:34.074 00:37:07 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.333 00:37:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.591 [ 00:17:34.591 { 00:17:34.591 "name": "BaseBdev1", 00:17:34.591 "aliases": [ 00:17:34.591 "945062d4-c9cc-4b74-9bc9-a65857601a67" 00:17:34.591 ], 00:17:34.591 "product_name": "Malloc disk", 00:17:34.591 "block_size": 512, 00:17:34.591 "num_blocks": 65536, 00:17:34.591 "uuid": "945062d4-c9cc-4b74-9bc9-a65857601a67", 00:17:34.591 "assigned_rate_limits": { 00:17:34.591 "rw_ios_per_sec": 0, 00:17:34.591 "rw_mbytes_per_sec": 0, 00:17:34.591 "r_mbytes_per_sec": 0, 00:17:34.591 "w_mbytes_per_sec": 0 00:17:34.591 }, 00:17:34.591 "claimed": true, 00:17:34.591 "claim_type": "exclusive_write", 00:17:34.591 "zoned": false, 00:17:34.591 "supported_io_types": { 00:17:34.591 "read": true, 00:17:34.591 "write": true, 00:17:34.591 "unmap": true, 00:17:34.591 "write_zeroes": true, 00:17:34.591 "flush": true, 00:17:34.591 "reset": true, 00:17:34.591 "compare": false, 00:17:34.591 "compare_and_write": false, 00:17:34.591 "abort": true, 00:17:34.591 "nvme_admin": false, 00:17:34.591 "nvme_io": false 00:17:34.591 }, 00:17:34.591 "memory_domains": [ 00:17:34.591 { 00:17:34.591 "dma_device_id": "system", 00:17:34.591 "dma_device_type": 1 00:17:34.591 }, 00:17:34.591 { 00:17:34.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.591 "dma_device_type": 2 00:17:34.591 } 00:17:34.591 ], 00:17:34.591 "driver_specific": {} 00:17:34.591 } 00:17:34.591 ] 00:17:34.591 00:37:08 -- common/autotest_common.sh@893 -- # return 0 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.591 00:37:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.849 00:37:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:34.849 "name": "Existed_Raid", 00:17:34.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.850 "strip_size_kb": 64, 00:17:34.850 "state": "configuring", 00:17:34.850 "raid_level": "concat", 00:17:34.850 "superblock": false, 00:17:34.850 "num_base_bdevs": 3, 00:17:34.850 "num_base_bdevs_discovered": 1, 00:17:34.850 "num_base_bdevs_operational": 3, 00:17:34.850 "base_bdevs_list": [ 00:17:34.850 { 00:17:34.850 "name": "BaseBdev1", 00:17:34.850 "uuid": "945062d4-c9cc-4b74-9bc9-a65857601a67", 00:17:34.850 "is_configured": true, 00:17:34.850 "data_offset": 0, 00:17:34.850 "data_size": 65536 00:17:34.850 }, 00:17:34.850 { 00:17:34.850 "name": "BaseBdev2", 00:17:34.850 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:34.850 "is_configured": false, 00:17:34.850 "data_offset": 0, 00:17:34.850 "data_size": 0 00:17:34.850 }, 00:17:34.850 { 00:17:34.850 "name": "BaseBdev3", 00:17:34.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.850 "is_configured": false, 00:17:34.850 "data_offset": 0, 00:17:34.850 "data_size": 0 00:17:34.850 } 00:17:34.850 ] 00:17:34.850 }' 00:17:34.850 00:37:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:34.850 00:37:08 -- common/autotest_common.sh@10 -- # set +x 00:17:35.440 00:37:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:35.698 [2024-04-27 00:37:09.118464] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.698 [2024-04-27 00:37:09.118518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:17:35.698 00:37:09 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:35.699 00:37:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:35.957 [2024-04-27 00:37:09.370534] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.957 [2024-04-27 00:37:09.372502] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.957 [2024-04-27 00:37:09.372557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.957 [2024-04-27 00:37:09.372583] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.957 [2024-04-27 00:37:09.372607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.957 00:37:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.216 00:37:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.216 "name": "Existed_Raid", 00:17:36.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.216 "strip_size_kb": 64, 00:17:36.216 "state": "configuring", 00:17:36.216 "raid_level": "concat", 00:17:36.216 "superblock": false, 00:17:36.216 "num_base_bdevs": 3, 00:17:36.216 "num_base_bdevs_discovered": 1, 00:17:36.216 "num_base_bdevs_operational": 3, 00:17:36.216 "base_bdevs_list": [ 00:17:36.216 { 00:17:36.216 "name": 
"BaseBdev1", 00:17:36.216 "uuid": "945062d4-c9cc-4b74-9bc9-a65857601a67", 00:17:36.216 "is_configured": true, 00:17:36.216 "data_offset": 0, 00:17:36.216 "data_size": 65536 00:17:36.216 }, 00:17:36.216 { 00:17:36.216 "name": "BaseBdev2", 00:17:36.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.216 "is_configured": false, 00:17:36.216 "data_offset": 0, 00:17:36.216 "data_size": 0 00:17:36.216 }, 00:17:36.216 { 00:17:36.216 "name": "BaseBdev3", 00:17:36.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.216 "is_configured": false, 00:17:36.216 "data_offset": 0, 00:17:36.216 "data_size": 0 00:17:36.216 } 00:17:36.216 ] 00:17:36.216 }' 00:17:36.216 00:37:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.217 00:37:09 -- common/autotest_common.sh@10 -- # set +x 00:17:36.785 00:37:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:37.044 [2024-04-27 00:37:10.595598] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.044 BaseBdev2 00:17:37.044 00:37:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:37.044 00:37:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:37.044 00:37:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:37.044 00:37:10 -- common/autotest_common.sh@887 -- # local i 00:17:37.044 00:37:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:37.044 00:37:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:37.044 00:37:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:37.611 00:37:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:37.611 [ 00:17:37.611 { 00:17:37.611 "name": "BaseBdev2", 00:17:37.611 "aliases": [ 00:17:37.611 "2196f71a-b250-4b39-867d-f72a8061e35b" 00:17:37.611 ], 00:17:37.611 "product_name": "Malloc disk", 00:17:37.611 "block_size": 512, 00:17:37.611 "num_blocks": 65536, 00:17:37.611 "uuid": "2196f71a-b250-4b39-867d-f72a8061e35b", 00:17:37.611 "assigned_rate_limits": { 00:17:37.611 "rw_ios_per_sec": 0, 00:17:37.611 "rw_mbytes_per_sec": 0, 00:17:37.611 "r_mbytes_per_sec": 0, 00:17:37.611 "w_mbytes_per_sec": 0 00:17:37.611 }, 00:17:37.611 "claimed": true, 00:17:37.611 "claim_type": "exclusive_write", 00:17:37.611 "zoned": false, 00:17:37.611 "supported_io_types": { 00:17:37.611 "read": true, 00:17:37.611 "write": true, 00:17:37.611 "unmap": true, 00:17:37.611 "write_zeroes": true, 00:17:37.611 "flush": true, 00:17:37.611 "reset": true, 00:17:37.611 "compare": false, 00:17:37.611 "compare_and_write": false, 00:17:37.612 "abort": true, 00:17:37.612 "nvme_admin": false, 00:17:37.612 "nvme_io": false 00:17:37.612 }, 00:17:37.612 "memory_domains": [ 00:17:37.612 { 00:17:37.612 "dma_device_id": "system", 00:17:37.612 "dma_device_type": 1 00:17:37.612 }, 00:17:37.612 { 00:17:37.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.612 "dma_device_type": 2 00:17:37.612 } 00:17:37.612 ], 00:17:37.612 "driver_specific": {} 00:17:37.612 } 00:17:37.612 ] 00:17:37.612 00:37:11 -- common/autotest_common.sh@893 -- # return 0 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.612 00:37:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.870 00:37:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.871 "name": "Existed_Raid", 00:17:37.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.871 "strip_size_kb": 64, 00:17:37.871 "state": "configuring", 00:17:37.871 "raid_level": "concat", 00:17:37.871 "superblock": false, 00:17:37.871 "num_base_bdevs": 3, 00:17:37.871 "num_base_bdevs_discovered": 2, 00:17:37.871 "num_base_bdevs_operational": 3, 00:17:37.871 "base_bdevs_list": [ 00:17:37.871 { 00:17:37.871 "name": "BaseBdev1", 00:17:37.871 "uuid": "945062d4-c9cc-4b74-9bc9-a65857601a67", 00:17:37.871 "is_configured": true, 00:17:37.871 "data_offset": 0, 00:17:37.871 "data_size": 65536 00:17:37.871 }, 00:17:37.871 { 00:17:37.871 "name": "BaseBdev2", 00:17:37.871 "uuid": "2196f71a-b250-4b39-867d-f72a8061e35b", 00:17:37.871 "is_configured": true, 00:17:37.871 "data_offset": 0, 00:17:37.871 "data_size": 65536 00:17:37.871 }, 00:17:37.871 { 00:17:37.871 "name": "BaseBdev3", 00:17:37.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.871 "is_configured": false, 00:17:37.871 "data_offset": 0, 00:17:37.871 "data_size": 0 00:17:37.871 } 00:17:37.871 ] 00:17:37.871 }' 00:17:37.871 00:37:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.871 00:37:11 -- common/autotest_common.sh@10 -- # set +x 00:17:38.807 00:37:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:38.807 [2024-04-27 00:37:12.273645] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.807 [2024-04-27 00:37:12.273712] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:38.807 [2024-04-27 00:37:12.273722] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:38.807 [2024-04-27 00:37:12.273889] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:38.807 [2024-04-27 00:37:12.274337] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:38.807 [2024-04-27 00:37:12.274373] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:17:38.807 [2024-04-27 00:37:12.274636] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.807 BaseBdev3 00:17:38.807 00:37:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:38.807 00:37:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:17:38.807 00:37:12 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:38.807 00:37:12 -- common/autotest_common.sh@887 -- # local i 00:17:38.807 00:37:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:38.807 00:37:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:38.807 00:37:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.064 00:37:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:39.321 [ 00:17:39.322 { 00:17:39.322 "name": "BaseBdev3", 00:17:39.322 "aliases": [ 00:17:39.322 "3e194587-1c28-4a14-ab77-b0dd2fcb4e32" 00:17:39.322 ], 00:17:39.322 "product_name": "Malloc disk", 00:17:39.322 "block_size": 512, 00:17:39.322 "num_blocks": 65536, 00:17:39.322 "uuid": "3e194587-1c28-4a14-ab77-b0dd2fcb4e32", 00:17:39.322 "assigned_rate_limits": { 00:17:39.322 "rw_ios_per_sec": 0, 00:17:39.322 "rw_mbytes_per_sec": 0, 00:17:39.322 "r_mbytes_per_sec": 0, 00:17:39.322 "w_mbytes_per_sec": 0 00:17:39.322 }, 00:17:39.322 "claimed": true, 00:17:39.322 "claim_type": "exclusive_write", 00:17:39.322 "zoned": false, 00:17:39.322 "supported_io_types": { 00:17:39.322 "read": true, 00:17:39.322 "write": true, 00:17:39.322 "unmap": true, 00:17:39.322 "write_zeroes": true, 00:17:39.322 "flush": true, 00:17:39.322 "reset": true, 00:17:39.322 "compare": false, 00:17:39.322 "compare_and_write": false, 00:17:39.322 "abort": true, 00:17:39.322 "nvme_admin": false, 00:17:39.322 "nvme_io": false 00:17:39.322 }, 00:17:39.322 "memory_domains": [ 00:17:39.322 { 00:17:39.322 "dma_device_id": "system", 00:17:39.322 "dma_device_type": 1 00:17:39.322 }, 00:17:39.322 { 00:17:39.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.322 "dma_device_type": 2 00:17:39.322 } 00:17:39.322 ], 00:17:39.322 "driver_specific": {} 00:17:39.322 } 00:17:39.322 ] 00:17:39.322 00:37:12 -- common/autotest_common.sh@893 -- # return 0 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.322 00:37:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.578 00:37:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.578 "name": "Existed_Raid", 00:17:39.578 "uuid": "3c535401-c621-4e03-a726-00aaea597d8e", 00:17:39.578 "strip_size_kb": 64, 00:17:39.578 "state": "online", 00:17:39.578 "raid_level": "concat", 00:17:39.578 "superblock": false, 00:17:39.578 "num_base_bdevs": 3, 
00:17:39.578 "num_base_bdevs_discovered": 3, 00:17:39.578 "num_base_bdevs_operational": 3, 00:17:39.579 "base_bdevs_list": [ 00:17:39.579 { 00:17:39.579 "name": "BaseBdev1", 00:17:39.579 "uuid": "945062d4-c9cc-4b74-9bc9-a65857601a67", 00:17:39.579 "is_configured": true, 00:17:39.579 "data_offset": 0, 00:17:39.579 "data_size": 65536 00:17:39.579 }, 00:17:39.579 { 00:17:39.579 "name": "BaseBdev2", 00:17:39.579 "uuid": "2196f71a-b250-4b39-867d-f72a8061e35b", 00:17:39.579 "is_configured": true, 00:17:39.579 "data_offset": 0, 00:17:39.579 "data_size": 65536 00:17:39.579 }, 00:17:39.579 { 00:17:39.579 "name": "BaseBdev3", 00:17:39.579 "uuid": "3e194587-1c28-4a14-ab77-b0dd2fcb4e32", 00:17:39.579 "is_configured": true, 00:17:39.579 "data_offset": 0, 00:17:39.579 "data_size": 65536 00:17:39.579 } 00:17:39.579 ] 00:17:39.579 }' 00:17:39.579 00:37:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.579 00:37:13 -- common/autotest_common.sh@10 -- # set +x 00:17:40.144 00:37:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:40.401 [2024-04-27 00:37:13.911362] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.401 [2024-04-27 00:37:13.911405] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.401 [2024-04-27 00:37:13.911482] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.658 00:37:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:40.658 00:37:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:40.658 00:37:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.658 00:37:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.658 "name": "Existed_Raid", 00:17:40.658 "uuid": "3c535401-c621-4e03-a726-00aaea597d8e", 00:17:40.658 "strip_size_kb": 64, 00:17:40.658 "state": "offline", 00:17:40.658 "raid_level": "concat", 00:17:40.658 "superblock": false, 00:17:40.658 "num_base_bdevs": 3, 00:17:40.658 "num_base_bdevs_discovered": 2, 00:17:40.658 "num_base_bdevs_operational": 2, 00:17:40.658 "base_bdevs_list": [ 00:17:40.658 { 00:17:40.658 "name": null, 00:17:40.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.658 "is_configured": false, 00:17:40.658 "data_offset": 0, 00:17:40.658 "data_size": 65536 00:17:40.658 }, 
00:17:40.658 { 00:17:40.658 "name": "BaseBdev2", 00:17:40.658 "uuid": "2196f71a-b250-4b39-867d-f72a8061e35b", 00:17:40.658 "is_configured": true, 00:17:40.658 "data_offset": 0, 00:17:40.658 "data_size": 65536 00:17:40.658 }, 00:17:40.658 { 00:17:40.658 "name": "BaseBdev3", 00:17:40.658 "uuid": "3e194587-1c28-4a14-ab77-b0dd2fcb4e32", 00:17:40.659 "is_configured": true, 00:17:40.659 "data_offset": 0, 00:17:40.659 "data_size": 65536 00:17:40.659 } 00:17:40.659 ] 00:17:40.659 }' 00:17:40.659 00:37:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.659 00:37:14 -- common/autotest_common.sh@10 -- # set +x 00:17:41.591 00:37:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:41.591 00:37:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:41.591 00:37:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.591 00:37:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:41.591 00:37:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:41.591 00:37:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:41.591 00:37:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:41.850 [2024-04-27 00:37:15.404648] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:42.120 00:37:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:42.120 00:37:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:42.120 00:37:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.120 00:37:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:42.392 00:37:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:42.392 00:37:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:42.392 00:37:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:42.650 [2024-04-27 00:37:15.999406] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:42.650 [2024-04-27 00:37:15.999502] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:17:42.650 00:37:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:42.650 00:37:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:42.650 00:37:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.650 00:37:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:42.910 00:37:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:42.910 00:37:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:42.910 00:37:16 -- bdev/bdev_raid.sh@287 -- # killprocess 123486 00:17:42.910 00:37:16 -- common/autotest_common.sh@936 -- # '[' -z 123486 ']' 00:17:42.910 00:37:16 -- common/autotest_common.sh@940 -- # kill -0 123486 00:17:42.910 00:37:16 -- common/autotest_common.sh@941 -- # uname 00:17:42.910 00:37:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.910 00:37:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123486 00:17:42.910 killing process with pid 123486 00:17:42.910 00:37:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:42.910 00:37:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:42.910 00:37:16 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 123486' 00:17:42.910 00:37:16 -- common/autotest_common.sh@955 -- # kill 123486 00:17:42.910 00:37:16 -- common/autotest_common.sh@960 -- # wait 123486 00:17:42.910 [2024-04-27 00:37:16.355587] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.910 [2024-04-27 00:37:16.355739] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.845 ************************************ 00:17:43.845 END TEST raid_state_function_test 00:17:43.845 ************************************ 00:17:43.845 00:37:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:43.845 00:17:43.845 real 0m12.568s 00:17:43.845 user 0m22.242s 00:17:43.845 sys 0m1.528s 00:17:43.845 00:37:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:43.845 00:37:17 -- common/autotest_common.sh@10 -- # set +x 00:17:43.845 00:37:17 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:43.845 00:37:17 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:43.845 00:37:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:43.845 00:37:17 -- common/autotest_common.sh@10 -- # set +x 00:17:44.106 ************************************ 00:17:44.106 START TEST raid_state_function_test_sb 00:17:44.106 ************************************ 00:17:44.106 00:37:17 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 3 true 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@226 -- # raid_pid=123872 00:17:44.106 Process raid pid: 123872 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@225 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123872' 00:17:44.106 00:37:17 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123872 /var/tmp/spdk-raid.sock 00:17:44.106 00:37:17 -- common/autotest_common.sh@817 -- # '[' -z 123872 ']' 00:17:44.106 00:37:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:44.106 00:37:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:44.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:44.106 00:37:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:44.106 00:37:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:44.106 00:37:17 -- common/autotest_common.sh@10 -- # set +x 00:17:44.106 [2024-04-27 00:37:17.508257] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:17:44.106 [2024-04-27 00:37:17.508453] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.106 [2024-04-27 00:37:17.680095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.365 [2024-04-27 00:37:17.923093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.623 [2024-04-27 00:37:18.113820] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.881 00:37:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:44.881 00:37:18 -- common/autotest_common.sh@850 -- # return 0 00:17:44.881 00:37:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:45.139 [2024-04-27 00:37:18.644859] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.139 [2024-04-27 00:37:18.645139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.139 [2024-04-27 00:37:18.645247] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.139 [2024-04-27 00:37:18.645325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.139 [2024-04-27 00:37:18.645414] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:45.139 [2024-04-27 00:37:18.645552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.139 00:37:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.398 00:37:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.398 "name": "Existed_Raid", 00:17:45.398 "uuid": "24219d80-6d7a-42e6-b16c-e63acef614df", 00:17:45.398 "strip_size_kb": 64, 00:17:45.398 "state": "configuring", 00:17:45.398 "raid_level": "concat", 00:17:45.398 "superblock": true, 00:17:45.398 "num_base_bdevs": 3, 00:17:45.398 "num_base_bdevs_discovered": 0, 00:17:45.398 "num_base_bdevs_operational": 3, 00:17:45.398 "base_bdevs_list": [ 00:17:45.398 { 00:17:45.398 "name": "BaseBdev1", 00:17:45.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.398 "is_configured": false, 00:17:45.398 "data_offset": 0, 00:17:45.398 "data_size": 0 00:17:45.398 }, 00:17:45.398 { 00:17:45.398 "name": "BaseBdev2", 00:17:45.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.398 "is_configured": false, 00:17:45.398 "data_offset": 0, 00:17:45.398 "data_size": 0 00:17:45.398 }, 00:17:45.398 { 00:17:45.398 "name": "BaseBdev3", 00:17:45.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.398 "is_configured": false, 00:17:45.398 "data_offset": 0, 00:17:45.398 "data_size": 0 00:17:45.398 } 00:17:45.398 ] 00:17:45.398 }' 00:17:45.398 00:37:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.398 00:37:18 -- common/autotest_common.sh@10 -- # set +x 00:17:45.965 00:37:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:46.224 [2024-04-27 00:37:19.796957] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.224 [2024-04-27 00:37:19.797178] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:17:46.483 00:37:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:46.483 [2024-04-27 00:37:20.057099] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.483 [2024-04-27 00:37:20.057451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.483 [2024-04-27 00:37:20.057567] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.483 [2024-04-27 00:37:20.057636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.483 [2024-04-27 00:37:20.057759] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:46.483 [2024-04-27 00:37:20.057832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.741 00:37:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.000 [2024-04-27 00:37:20.366859] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.000 BaseBdev1 00:17:47.000 00:37:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:47.000 00:37:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:47.000 00:37:20 -- common/autotest_common.sh@886 
-- # local bdev_timeout= 00:17:47.000 00:37:20 -- common/autotest_common.sh@887 -- # local i 00:17:47.000 00:37:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:47.000 00:37:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:47.000 00:37:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.258 00:37:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:47.258 [ 00:17:47.258 { 00:17:47.258 "name": "BaseBdev1", 00:17:47.258 "aliases": [ 00:17:47.258 "b8c9d19c-6b2c-401c-9008-1208f0261443" 00:17:47.258 ], 00:17:47.258 "product_name": "Malloc disk", 00:17:47.258 "block_size": 512, 00:17:47.258 "num_blocks": 65536, 00:17:47.258 "uuid": "b8c9d19c-6b2c-401c-9008-1208f0261443", 00:17:47.258 "assigned_rate_limits": { 00:17:47.258 "rw_ios_per_sec": 0, 00:17:47.258 "rw_mbytes_per_sec": 0, 00:17:47.258 "r_mbytes_per_sec": 0, 00:17:47.258 "w_mbytes_per_sec": 0 00:17:47.258 }, 00:17:47.258 "claimed": true, 00:17:47.258 "claim_type": "exclusive_write", 00:17:47.258 "zoned": false, 00:17:47.258 "supported_io_types": { 00:17:47.258 "read": true, 00:17:47.258 "write": true, 00:17:47.258 "unmap": true, 00:17:47.258 "write_zeroes": true, 00:17:47.258 "flush": true, 00:17:47.259 "reset": true, 00:17:47.259 "compare": false, 00:17:47.259 "compare_and_write": false, 00:17:47.259 "abort": true, 00:17:47.259 "nvme_admin": false, 00:17:47.259 "nvme_io": false 00:17:47.259 }, 00:17:47.259 "memory_domains": [ 00:17:47.259 { 00:17:47.259 "dma_device_id": "system", 00:17:47.259 "dma_device_type": 1 00:17:47.259 }, 00:17:47.259 { 00:17:47.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.259 "dma_device_type": 2 00:17:47.259 } 00:17:47.259 ], 00:17:47.259 "driver_specific": {} 00:17:47.259 } 00:17:47.259 ] 00:17:47.259 00:37:20 -- common/autotest_common.sh@893 -- # return 0 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.259 00:37:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.517 00:37:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.517 "name": "Existed_Raid", 00:17:47.517 "uuid": "d9a5463d-cb86-440b-8929-ef8a216299ee", 00:17:47.517 "strip_size_kb": 64, 00:17:47.517 "state": "configuring", 00:17:47.517 "raid_level": "concat", 00:17:47.517 "superblock": true, 00:17:47.517 "num_base_bdevs": 3, 00:17:47.517 "num_base_bdevs_discovered": 1, 00:17:47.517 "num_base_bdevs_operational": 3, 00:17:47.517 "base_bdevs_list": [ 00:17:47.517 { 00:17:47.517 "name": 
"BaseBdev1", 00:17:47.517 "uuid": "b8c9d19c-6b2c-401c-9008-1208f0261443", 00:17:47.517 "is_configured": true, 00:17:47.517 "data_offset": 2048, 00:17:47.517 "data_size": 63488 00:17:47.517 }, 00:17:47.517 { 00:17:47.517 "name": "BaseBdev2", 00:17:47.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.517 "is_configured": false, 00:17:47.517 "data_offset": 0, 00:17:47.517 "data_size": 0 00:17:47.517 }, 00:17:47.517 { 00:17:47.517 "name": "BaseBdev3", 00:17:47.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.517 "is_configured": false, 00:17:47.517 "data_offset": 0, 00:17:47.517 "data_size": 0 00:17:47.517 } 00:17:47.517 ] 00:17:47.517 }' 00:17:47.517 00:37:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.517 00:37:21 -- common/autotest_common.sh@10 -- # set +x 00:17:48.085 00:37:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:48.344 [2024-04-27 00:37:21.839401] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:48.344 [2024-04-27 00:37:21.839631] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:17:48.344 00:37:21 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:48.344 00:37:21 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.602 00:37:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:48.860 BaseBdev1 00:17:48.860 00:37:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:48.860 00:37:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:48.860 00:37:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:48.860 00:37:22 -- common/autotest_common.sh@887 -- # local i 00:17:48.860 00:37:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:48.860 00:37:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:48.860 00:37:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:49.173 00:37:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.449 [ 00:17:49.449 { 00:17:49.449 "name": "BaseBdev1", 00:17:49.449 "aliases": [ 00:17:49.449 "1acc8b8d-bbf5-4e05-a684-5b7e2a182121" 00:17:49.449 ], 00:17:49.449 "product_name": "Malloc disk", 00:17:49.449 "block_size": 512, 00:17:49.449 "num_blocks": 65536, 00:17:49.449 "uuid": "1acc8b8d-bbf5-4e05-a684-5b7e2a182121", 00:17:49.449 "assigned_rate_limits": { 00:17:49.449 "rw_ios_per_sec": 0, 00:17:49.449 "rw_mbytes_per_sec": 0, 00:17:49.449 "r_mbytes_per_sec": 0, 00:17:49.449 "w_mbytes_per_sec": 0 00:17:49.449 }, 00:17:49.449 "claimed": false, 00:17:49.449 "zoned": false, 00:17:49.449 "supported_io_types": { 00:17:49.449 "read": true, 00:17:49.449 "write": true, 00:17:49.449 "unmap": true, 00:17:49.449 "write_zeroes": true, 00:17:49.449 "flush": true, 00:17:49.449 "reset": true, 00:17:49.449 "compare": false, 00:17:49.449 "compare_and_write": false, 00:17:49.449 "abort": true, 00:17:49.449 "nvme_admin": false, 00:17:49.449 "nvme_io": false 00:17:49.449 }, 00:17:49.449 "memory_domains": [ 00:17:49.449 { 00:17:49.449 "dma_device_id": "system", 00:17:49.449 "dma_device_type": 1 00:17:49.449 }, 00:17:49.449 { 
00:17:49.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.449 "dma_device_type": 2 00:17:49.449 } 00:17:49.449 ], 00:17:49.449 "driver_specific": {} 00:17:49.449 } 00:17:49.449 ] 00:17:49.449 00:37:22 -- common/autotest_common.sh@893 -- # return 0 00:17:49.449 00:37:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:49.718 [2024-04-27 00:37:23.115680] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.718 [2024-04-27 00:37:23.117625] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.718 [2024-04-27 00:37:23.117698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.718 [2024-04-27 00:37:23.117726] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:49.718 [2024-04-27 00:37:23.117753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.718 00:37:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.976 00:37:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.976 "name": "Existed_Raid", 00:17:49.976 "uuid": "d23f4bbc-7a21-46be-adab-a8f678c8d72b", 00:17:49.976 "strip_size_kb": 64, 00:17:49.976 "state": "configuring", 00:17:49.976 "raid_level": "concat", 00:17:49.976 "superblock": true, 00:17:49.976 "num_base_bdevs": 3, 00:17:49.976 "num_base_bdevs_discovered": 1, 00:17:49.976 "num_base_bdevs_operational": 3, 00:17:49.976 "base_bdevs_list": [ 00:17:49.976 { 00:17:49.976 "name": "BaseBdev1", 00:17:49.976 "uuid": "1acc8b8d-bbf5-4e05-a684-5b7e2a182121", 00:17:49.976 "is_configured": true, 00:17:49.976 "data_offset": 2048, 00:17:49.976 "data_size": 63488 00:17:49.976 }, 00:17:49.976 { 00:17:49.976 "name": "BaseBdev2", 00:17:49.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.976 "is_configured": false, 00:17:49.976 "data_offset": 0, 00:17:49.976 "data_size": 0 00:17:49.976 }, 00:17:49.976 { 00:17:49.976 "name": "BaseBdev3", 00:17:49.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.976 "is_configured": false, 00:17:49.976 "data_offset": 0, 00:17:49.976 "data_size": 0 00:17:49.976 } 00:17:49.976 ] 00:17:49.976 }' 00:17:49.977 00:37:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 
00:17:49.977 00:37:23 -- common/autotest_common.sh@10 -- # set +x 00:17:50.544 00:37:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:50.803 [2024-04-27 00:37:24.274767] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.803 BaseBdev2 00:17:50.803 00:37:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:50.803 00:37:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:50.803 00:37:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:50.803 00:37:24 -- common/autotest_common.sh@887 -- # local i 00:17:50.803 00:37:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:50.803 00:37:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:50.803 00:37:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.061 00:37:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:51.321 [ 00:17:51.321 { 00:17:51.321 "name": "BaseBdev2", 00:17:51.321 "aliases": [ 00:17:51.321 "267a23fb-7ebf-4a4c-ac1e-05912a001ce9" 00:17:51.321 ], 00:17:51.321 "product_name": "Malloc disk", 00:17:51.321 "block_size": 512, 00:17:51.321 "num_blocks": 65536, 00:17:51.321 "uuid": "267a23fb-7ebf-4a4c-ac1e-05912a001ce9", 00:17:51.321 "assigned_rate_limits": { 00:17:51.321 "rw_ios_per_sec": 0, 00:17:51.321 "rw_mbytes_per_sec": 0, 00:17:51.321 "r_mbytes_per_sec": 0, 00:17:51.321 "w_mbytes_per_sec": 0 00:17:51.321 }, 00:17:51.321 "claimed": true, 00:17:51.321 "claim_type": "exclusive_write", 00:17:51.321 "zoned": false, 00:17:51.321 "supported_io_types": { 00:17:51.321 "read": true, 00:17:51.321 "write": true, 00:17:51.321 "unmap": true, 00:17:51.321 "write_zeroes": true, 00:17:51.321 "flush": true, 00:17:51.321 "reset": true, 00:17:51.321 "compare": false, 00:17:51.321 "compare_and_write": false, 00:17:51.321 "abort": true, 00:17:51.321 "nvme_admin": false, 00:17:51.321 "nvme_io": false 00:17:51.321 }, 00:17:51.321 "memory_domains": [ 00:17:51.321 { 00:17:51.321 "dma_device_id": "system", 00:17:51.321 "dma_device_type": 1 00:17:51.321 }, 00:17:51.321 { 00:17:51.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.321 "dma_device_type": 2 00:17:51.321 } 00:17:51.321 ], 00:17:51.321 "driver_specific": {} 00:17:51.321 } 00:17:51.321 ] 00:17:51.321 00:37:24 -- common/autotest_common.sh@893 -- # return 0 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:51.321 00:37:24 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.321 00:37:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.581 00:37:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:51.581 "name": "Existed_Raid", 00:17:51.581 "uuid": "d23f4bbc-7a21-46be-adab-a8f678c8d72b", 00:17:51.581 "strip_size_kb": 64, 00:17:51.581 "state": "configuring", 00:17:51.581 "raid_level": "concat", 00:17:51.581 "superblock": true, 00:17:51.581 "num_base_bdevs": 3, 00:17:51.581 "num_base_bdevs_discovered": 2, 00:17:51.581 "num_base_bdevs_operational": 3, 00:17:51.581 "base_bdevs_list": [ 00:17:51.581 { 00:17:51.581 "name": "BaseBdev1", 00:17:51.581 "uuid": "1acc8b8d-bbf5-4e05-a684-5b7e2a182121", 00:17:51.581 "is_configured": true, 00:17:51.581 "data_offset": 2048, 00:17:51.581 "data_size": 63488 00:17:51.581 }, 00:17:51.581 { 00:17:51.581 "name": "BaseBdev2", 00:17:51.581 "uuid": "267a23fb-7ebf-4a4c-ac1e-05912a001ce9", 00:17:51.581 "is_configured": true, 00:17:51.581 "data_offset": 2048, 00:17:51.581 "data_size": 63488 00:17:51.581 }, 00:17:51.581 { 00:17:51.581 "name": "BaseBdev3", 00:17:51.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.581 "is_configured": false, 00:17:51.581 "data_offset": 0, 00:17:51.581 "data_size": 0 00:17:51.581 } 00:17:51.581 ] 00:17:51.581 }' 00:17:51.581 00:37:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:51.581 00:37:24 -- common/autotest_common.sh@10 -- # set +x 00:17:52.148 00:37:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:52.407 [2024-04-27 00:37:25.855283] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:52.407 [2024-04-27 00:37:25.855737] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:17:52.407 [2024-04-27 00:37:25.855881] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:52.407 [2024-04-27 00:37:25.856042] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:52.407 BaseBdev3 00:17:52.407 [2024-04-27 00:37:25.856492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:17:52.407 [2024-04-27 00:37:25.856705] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:17:52.407 [2024-04-27 00:37:25.857003] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.407 00:37:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:52.407 00:37:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:17:52.407 00:37:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:52.407 00:37:25 -- common/autotest_common.sh@887 -- # local i 00:17:52.407 00:37:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:52.407 00:37:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:52.407 00:37:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.666 00:37:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:52.984 [ 00:17:52.984 { 00:17:52.984 "name": "BaseBdev3", 00:17:52.984 "aliases": [ 00:17:52.984 "685a2e39-7363-4a4b-b66c-36f01a590174" 00:17:52.984 
], 00:17:52.984 "product_name": "Malloc disk", 00:17:52.984 "block_size": 512, 00:17:52.984 "num_blocks": 65536, 00:17:52.984 "uuid": "685a2e39-7363-4a4b-b66c-36f01a590174", 00:17:52.984 "assigned_rate_limits": { 00:17:52.984 "rw_ios_per_sec": 0, 00:17:52.984 "rw_mbytes_per_sec": 0, 00:17:52.984 "r_mbytes_per_sec": 0, 00:17:52.984 "w_mbytes_per_sec": 0 00:17:52.984 }, 00:17:52.984 "claimed": true, 00:17:52.984 "claim_type": "exclusive_write", 00:17:52.984 "zoned": false, 00:17:52.984 "supported_io_types": { 00:17:52.984 "read": true, 00:17:52.984 "write": true, 00:17:52.984 "unmap": true, 00:17:52.984 "write_zeroes": true, 00:17:52.984 "flush": true, 00:17:52.984 "reset": true, 00:17:52.984 "compare": false, 00:17:52.984 "compare_and_write": false, 00:17:52.984 "abort": true, 00:17:52.984 "nvme_admin": false, 00:17:52.984 "nvme_io": false 00:17:52.984 }, 00:17:52.984 "memory_domains": [ 00:17:52.984 { 00:17:52.984 "dma_device_id": "system", 00:17:52.984 "dma_device_type": 1 00:17:52.984 }, 00:17:52.984 { 00:17:52.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.984 "dma_device_type": 2 00:17:52.984 } 00:17:52.984 ], 00:17:52.984 "driver_specific": {} 00:17:52.984 } 00:17:52.984 ] 00:17:52.984 00:37:26 -- common/autotest_common.sh@893 -- # return 0 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.984 00:37:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.242 00:37:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.242 "name": "Existed_Raid", 00:17:53.242 "uuid": "d23f4bbc-7a21-46be-adab-a8f678c8d72b", 00:17:53.242 "strip_size_kb": 64, 00:17:53.242 "state": "online", 00:17:53.242 "raid_level": "concat", 00:17:53.242 "superblock": true, 00:17:53.242 "num_base_bdevs": 3, 00:17:53.242 "num_base_bdevs_discovered": 3, 00:17:53.242 "num_base_bdevs_operational": 3, 00:17:53.242 "base_bdevs_list": [ 00:17:53.242 { 00:17:53.242 "name": "BaseBdev1", 00:17:53.242 "uuid": "1acc8b8d-bbf5-4e05-a684-5b7e2a182121", 00:17:53.242 "is_configured": true, 00:17:53.242 "data_offset": 2048, 00:17:53.242 "data_size": 63488 00:17:53.242 }, 00:17:53.242 { 00:17:53.242 "name": "BaseBdev2", 00:17:53.242 "uuid": "267a23fb-7ebf-4a4c-ac1e-05912a001ce9", 00:17:53.242 "is_configured": true, 00:17:53.242 "data_offset": 2048, 00:17:53.242 "data_size": 63488 00:17:53.242 }, 00:17:53.242 { 00:17:53.242 "name": "BaseBdev3", 00:17:53.242 "uuid": "685a2e39-7363-4a4b-b66c-36f01a590174", 00:17:53.242 "is_configured": true, 00:17:53.242 "data_offset": 
2048, 00:17:53.242 "data_size": 63488 00:17:53.242 } 00:17:53.242 ] 00:17:53.242 }' 00:17:53.242 00:37:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.242 00:37:26 -- common/autotest_common.sh@10 -- # set +x 00:17:53.810 00:37:27 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:54.067 [2024-04-27 00:37:27.507739] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.067 [2024-04-27 00:37:27.507897] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.067 [2024-04-27 00:37:27.508039] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.067 00:37:27 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:54.067 00:37:27 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:54.067 00:37:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:54.067 00:37:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:54.067 00:37:27 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:54.067 00:37:27 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:54.067 00:37:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.068 00:37:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.326 00:37:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.326 "name": "Existed_Raid", 00:17:54.326 "uuid": "d23f4bbc-7a21-46be-adab-a8f678c8d72b", 00:17:54.326 "strip_size_kb": 64, 00:17:54.326 "state": "offline", 00:17:54.326 "raid_level": "concat", 00:17:54.326 "superblock": true, 00:17:54.326 "num_base_bdevs": 3, 00:17:54.326 "num_base_bdevs_discovered": 2, 00:17:54.326 "num_base_bdevs_operational": 2, 00:17:54.326 "base_bdevs_list": [ 00:17:54.326 { 00:17:54.326 "name": null, 00:17:54.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.326 "is_configured": false, 00:17:54.326 "data_offset": 2048, 00:17:54.326 "data_size": 63488 00:17:54.326 }, 00:17:54.326 { 00:17:54.326 "name": "BaseBdev2", 00:17:54.326 "uuid": "267a23fb-7ebf-4a4c-ac1e-05912a001ce9", 00:17:54.326 "is_configured": true, 00:17:54.326 "data_offset": 2048, 00:17:54.326 "data_size": 63488 00:17:54.327 }, 00:17:54.327 { 00:17:54.327 "name": "BaseBdev3", 00:17:54.327 "uuid": "685a2e39-7363-4a4b-b66c-36f01a590174", 00:17:54.327 "is_configured": true, 00:17:54.327 "data_offset": 2048, 00:17:54.327 "data_size": 63488 00:17:54.327 } 00:17:54.327 ] 00:17:54.327 }' 00:17:54.327 00:37:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.327 00:37:27 -- common/autotest_common.sh@10 -- # set +x 00:17:54.893 00:37:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:54.893 00:37:28 -- bdev/bdev_raid.sh@273 -- # (( i < 
num_base_bdevs )) 00:17:54.893 00:37:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:54.893 00:37:28 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.152 00:37:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:55.152 00:37:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:55.152 00:37:28 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:55.410 [2024-04-27 00:37:28.996752] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:55.668 00:37:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:55.668 00:37:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:55.668 00:37:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.668 00:37:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:55.927 00:37:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:55.927 00:37:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:55.927 00:37:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:56.186 [2024-04-27 00:37:29.543040] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:56.186 [2024-04-27 00:37:29.543256] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:17:56.186 00:37:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:56.186 00:37:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:56.186 00:37:29 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.186 00:37:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:56.446 00:37:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:56.446 00:37:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:56.446 00:37:29 -- bdev/bdev_raid.sh@287 -- # killprocess 123872 00:17:56.446 00:37:29 -- common/autotest_common.sh@936 -- # '[' -z 123872 ']' 00:17:56.446 00:37:29 -- common/autotest_common.sh@940 -- # kill -0 123872 00:17:56.446 00:37:29 -- common/autotest_common.sh@941 -- # uname 00:17:56.446 00:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.446 00:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 123872 00:17:56.446 killing process with pid 123872 00:17:56.446 00:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:56.446 00:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:56.446 00:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 123872' 00:17:56.446 00:37:29 -- common/autotest_common.sh@955 -- # kill 123872 00:17:56.446 [2024-04-27 00:37:29.863525] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.446 00:37:29 -- common/autotest_common.sh@960 -- # wait 123872 00:17:56.446 [2024-04-27 00:37:29.863656] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:57.382 00:17:57.382 real 0m13.401s 00:17:57.382 user 0m23.690s 00:17:57.382 sys 0m1.631s 00:17:57.382 00:37:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:57.382 00:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:57.382 
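In short, the check that just ran: concat carries no redundancy (has_redundancy returns 1), so removing any single base bdev must drive the array from online to offline. With the socket and bdev names taken from the trace above, the degraded-path check boils down to roughly:

  # delete one base bdev out from under the array...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
  # ...then confirm the RAID bdev reports itself offline
  state=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state')
  [ "$state" = offline ]    # concat: one lost member is fatal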
************************************ 00:17:57.382 END TEST raid_state_function_test_sb 00:17:57.382 ************************************ 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:57.382 00:37:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:57.382 00:37:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:57.382 00:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:57.382 ************************************ 00:17:57.382 START TEST raid_superblock_test 00:17:57.382 ************************************ 00:17:57.382 00:37:30 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 3 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@357 -- # raid_pid=124273 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:57.382 00:37:30 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124273 /var/tmp/spdk-raid.sock 00:17:57.382 00:37:30 -- common/autotest_common.sh@817 -- # '[' -z 124273 ']' 00:17:57.382 00:37:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:57.382 00:37:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.382 00:37:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:57.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:57.382 00:37:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.382 00:37:30 -- common/autotest_common.sh@10 -- # set +x 00:17:57.640 [2024-04-27 00:37:31.000521] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
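The raid_superblock_test fixture starting here is a bare bdev_svc application on a private RPC socket with bdev_raid debug logging enabled; every subsequent bdev_* call in this trace goes through rpc.py against that socket. A minimal sketch of the startup, using the paths logged in this trace (waitforlisten is a harness helper; the polling loop below is only a stand-in for it):

  # launch the bare bdev service app on a private RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  # poll until the app answers RPC, as waitforlisten does
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods \
      >/dev/null 2>&1; do sleep 0.1; done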
00:17:57.640 [2024-04-27 00:37:31.000735] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124273 ] 00:17:57.640 [2024-04-27 00:37:31.163357] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.899 [2024-04-27 00:37:31.356635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.156 [2024-04-27 00:37:31.525673] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.414 00:37:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.415 00:37:31 -- common/autotest_common.sh@850 -- # return 0 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.415 00:37:31 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:58.980 malloc1 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.980 [2024-04-27 00:37:32.544133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.980 [2024-04-27 00:37:32.544260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.980 [2024-04-27 00:37:32.544299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:58.980 [2024-04-27 00:37:32.544347] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.980 [2024-04-27 00:37:32.546833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.980 [2024-04-27 00:37:32.546884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.980 pt1 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.980 00:37:32 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:59.239 malloc2 00:17:59.239 00:37:32 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:59.497 [2024-04-27 00:37:33.020263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.497 [2024-04-27 00:37:33.020367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.497 [2024-04-27 00:37:33.020412] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:59.497 [2024-04-27 00:37:33.020467] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.497 [2024-04-27 00:37:33.022992] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.497 [2024-04-27 00:37:33.023042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.497 pt2 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.497 00:37:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:59.755 malloc3 00:17:59.755 00:37:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:00.013 [2024-04-27 00:37:33.509507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:00.013 [2024-04-27 00:37:33.509609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.013 [2024-04-27 00:37:33.509652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:00.013 [2024-04-27 00:37:33.509694] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.013 [2024-04-27 00:37:33.512037] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.013 [2024-04-27 00:37:33.512088] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:00.013 pt3 00:18:00.013 00:37:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:00.013 00:37:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:00.013 00:37:33 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:00.271 [2024-04-27 00:37:33.757670] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.271 [2024-04-27 00:37:33.760063] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.271 [2024-04-27 00:37:33.760173] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:00.271 [2024-04-27 00:37:33.760424] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:18:00.271 [2024-04-27 00:37:33.760457] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:00.271 [2024-04-27 00:37:33.760603] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:00.271 [2024-04-27 00:37:33.761021] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:18:00.271 [2024-04-27 00:37:33.761044] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:18:00.271 [2024-04-27 00:37:33.761267] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.271 00:37:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.530 00:37:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.530 "name": "raid_bdev1", 00:18:00.530 "uuid": "bcacc142-6baf-4e56-8414-544a9ddeedf8", 00:18:00.530 "strip_size_kb": 64, 00:18:00.530 "state": "online", 00:18:00.530 "raid_level": "concat", 00:18:00.530 "superblock": true, 00:18:00.530 "num_base_bdevs": 3, 00:18:00.530 "num_base_bdevs_discovered": 3, 00:18:00.530 "num_base_bdevs_operational": 3, 00:18:00.530 "base_bdevs_list": [ 00:18:00.530 { 00:18:00.530 "name": "pt1", 00:18:00.530 "uuid": "50d7354e-f970-54c0-aec7-a5ff784c3cab", 00:18:00.530 "is_configured": true, 00:18:00.530 "data_offset": 2048, 00:18:00.530 "data_size": 63488 00:18:00.530 }, 00:18:00.530 { 00:18:00.530 "name": "pt2", 00:18:00.530 "uuid": "24e36dd0-64c6-51a8-aae5-5529589786ee", 00:18:00.530 "is_configured": true, 00:18:00.530 "data_offset": 2048, 00:18:00.530 "data_size": 63488 00:18:00.530 }, 00:18:00.530 { 00:18:00.530 "name": "pt3", 00:18:00.530 "uuid": "6d3a965e-8f1a-5e18-99e7-d86cc4557eb1", 00:18:00.530 "is_configured": true, 00:18:00.530 "data_offset": 2048, 00:18:00.530 "data_size": 63488 00:18:00.530 } 00:18:00.530 ] 00:18:00.530 }' 00:18:00.530 00:37:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.530 00:37:33 -- common/autotest_common.sh@10 -- # set +x 00:18:01.097 00:37:34 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.097 00:37:34 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:01.356 [2024-04-27 00:37:34.814058] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.356 00:37:34 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=bcacc142-6baf-4e56-8414-544a9ddeedf8 00:18:01.356 00:37:34 -- bdev/bdev_raid.sh@380 -- # '[' -z bcacc142-6baf-4e56-8414-544a9ddeedf8 ']' 00:18:01.356 00:37:34 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:01.614 [2024-04-27 00:37:35.025829] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.614 [2024-04-27 00:37:35.025903] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.614 [2024-04-27 00:37:35.026005] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.614 [2024-04-27 00:37:35.026090] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.614 [2024-04-27 00:37:35.026103] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:18:01.614 00:37:35 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:01.614 00:37:35 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.873 00:37:35 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:01.873 00:37:35 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:01.873 00:37:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:01.873 00:37:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:02.132 00:37:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.132 00:37:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:02.132 00:37:35 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.132 00:37:35 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:02.390 00:37:35 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:02.390 00:37:35 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:02.649 00:37:36 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:02.649 00:37:36 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:02.649 00:37:36 -- common/autotest_common.sh@638 -- # local es=0 00:18:02.650 00:37:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:02.650 00:37:36 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.650 00:37:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:02.650 00:37:36 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.650 00:37:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:02.650 00:37:36 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.650 00:37:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:02.650 00:37:36 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.650 00:37:36 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:02.650 00:37:36 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:02.908 [2024-04-27 00:37:36.358207] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:02.908 [2024-04-27 00:37:36.360229] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:02.908 [2024-04-27 00:37:36.360399] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:02.908 [2024-04-27 00:37:36.360502] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:02.908 [2024-04-27 00:37:36.360805] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:02.909 [2024-04-27 00:37:36.360961] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:02.909 [2024-04-27 00:37:36.361047] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.909 [2024-04-27 00:37:36.361086] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:18:02.909 request: 00:18:02.909 { 00:18:02.909 "name": "raid_bdev1", 00:18:02.909 "raid_level": "concat", 00:18:02.909 "base_bdevs": [ 00:18:02.909 "malloc1", 00:18:02.909 "malloc2", 00:18:02.909 "malloc3" 00:18:02.909 ], 00:18:02.909 "superblock": false, 00:18:02.909 "strip_size_kb": 64, 00:18:02.909 "method": "bdev_raid_create", 00:18:02.909 "req_id": 1 00:18:02.909 } 00:18:02.909 Got JSON-RPC error response 00:18:02.909 response: 00:18:02.909 { 00:18:02.909 "code": -17, 00:18:02.909 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:02.909 } 00:18:02.909 00:37:36 -- common/autotest_common.sh@641 -- # es=1 00:18:02.909 00:37:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:02.909 00:37:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:02.909 00:37:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:02.909 00:37:36 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.909 00:37:36 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:03.168 00:37:36 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:03.168 00:37:36 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:03.168 00:37:36 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.428 [2024-04-27 00:37:36.762216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.428 [2024-04-27 00:37:36.762453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.428 [2024-04-27 00:37:36.762531] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:03.428 [2024-04-27 00:37:36.762785] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.428 [2024-04-27 00:37:36.764956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.428 [2024-04-27 00:37:36.765129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.428 [2024-04-27 00:37:36.765355] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:03.428 [2024-04-27 00:37:36.765498] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.428 pt1 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring concat 64 3 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.428 00:37:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.692 00:37:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.692 "name": "raid_bdev1", 00:18:03.692 "uuid": "bcacc142-6baf-4e56-8414-544a9ddeedf8", 00:18:03.692 "strip_size_kb": 64, 00:18:03.692 "state": "configuring", 00:18:03.692 "raid_level": "concat", 00:18:03.692 "superblock": true, 00:18:03.692 "num_base_bdevs": 3, 00:18:03.692 "num_base_bdevs_discovered": 1, 00:18:03.692 "num_base_bdevs_operational": 3, 00:18:03.692 "base_bdevs_list": [ 00:18:03.692 { 00:18:03.692 "name": "pt1", 00:18:03.692 "uuid": "50d7354e-f970-54c0-aec7-a5ff784c3cab", 00:18:03.692 "is_configured": true, 00:18:03.692 "data_offset": 2048, 00:18:03.692 "data_size": 63488 00:18:03.692 }, 00:18:03.692 { 00:18:03.692 "name": null, 00:18:03.692 "uuid": "24e36dd0-64c6-51a8-aae5-5529589786ee", 00:18:03.692 "is_configured": false, 00:18:03.692 "data_offset": 2048, 00:18:03.692 "data_size": 63488 00:18:03.692 }, 00:18:03.692 { 00:18:03.692 "name": null, 00:18:03.692 "uuid": "6d3a965e-8f1a-5e18-99e7-d86cc4557eb1", 00:18:03.692 "is_configured": false, 00:18:03.692 "data_offset": 2048, 00:18:03.692 "data_size": 63488 00:18:03.692 } 00:18:03.692 ] 00:18:03.692 }' 00:18:03.692 00:37:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.692 00:37:37 -- common/autotest_common.sh@10 -- # set +x 00:18:04.257 00:37:37 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:04.257 00:37:37 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.515 [2024-04-27 00:37:37.894542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.515 [2024-04-27 00:37:37.894860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.515 [2024-04-27 00:37:37.895038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:04.515 [2024-04-27 00:37:37.895181] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.515 [2024-04-27 00:37:37.895789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.515 [2024-04-27 00:37:37.895992] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.515 [2024-04-27 00:37:37.896264] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:04.515 [2024-04-27 00:37:37.896393] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.515 pt2 00:18:04.515 00:37:37 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:04.515 [2024-04-27 00:37:38.098683] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.773 00:37:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.774 00:37:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.774 00:37:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.774 "name": "raid_bdev1", 00:18:04.774 "uuid": "bcacc142-6baf-4e56-8414-544a9ddeedf8", 00:18:04.774 "strip_size_kb": 64, 00:18:04.774 "state": "configuring", 00:18:04.774 "raid_level": "concat", 00:18:04.774 "superblock": true, 00:18:04.774 "num_base_bdevs": 3, 00:18:04.774 "num_base_bdevs_discovered": 1, 00:18:04.774 "num_base_bdevs_operational": 3, 00:18:04.774 "base_bdevs_list": [ 00:18:04.774 { 00:18:04.774 "name": "pt1", 00:18:04.774 "uuid": "50d7354e-f970-54c0-aec7-a5ff784c3cab", 00:18:04.774 "is_configured": true, 00:18:04.774 "data_offset": 2048, 00:18:04.774 "data_size": 63488 00:18:04.774 }, 00:18:04.774 { 00:18:04.774 "name": null, 00:18:04.774 "uuid": "24e36dd0-64c6-51a8-aae5-5529589786ee", 00:18:04.774 "is_configured": false, 00:18:04.774 "data_offset": 2048, 00:18:04.774 "data_size": 63488 00:18:04.774 }, 00:18:04.774 { 00:18:04.774 "name": null, 00:18:04.774 "uuid": "6d3a965e-8f1a-5e18-99e7-d86cc4557eb1", 00:18:04.774 "is_configured": false, 00:18:04.774 "data_offset": 2048, 00:18:04.774 "data_size": 63488 00:18:04.774 } 00:18:04.774 ] 00:18:04.774 }' 00:18:04.774 00:37:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.774 00:37:38 -- common/autotest_common.sh@10 -- # set +x 00:18:05.342 00:37:38 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:05.342 00:37:38 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:05.342 00:37:38 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.601 [2024-04-27 00:37:39.170861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.601 [2024-04-27 00:37:39.171134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.601 [2024-04-27 00:37:39.171299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:05.601 [2024-04-27 00:37:39.171421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.601 [2024-04-27 00:37:39.172054] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.601 [2024-04-27 00:37:39.172223] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.601 [2024-04-27 00:37:39.172447] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:05.601 [2024-04-27 00:37:39.172577] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:05.601 pt2 00:18:05.601 00:37:39 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:05.601 00:37:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:05.860 00:37:39 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:06.118 [2024-04-27 00:37:39.471000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:06.118 [2024-04-27 00:37:39.471366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.118 [2024-04-27 00:37:39.471517] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:06.118 [2024-04-27 00:37:39.471647] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.118 [2024-04-27 00:37:39.472199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.118 [2024-04-27 00:37:39.472413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:06.118 [2024-04-27 00:37:39.472658] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:06.118 [2024-04-27 00:37:39.472776] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:06.118 [2024-04-27 00:37:39.472971] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:18:06.118 [2024-04-27 00:37:39.473077] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:06.118 [2024-04-27 00:37:39.473347] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:06.118 [2024-04-27 00:37:39.473808] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:18:06.118 [2024-04-27 00:37:39.473931] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:18:06.118 [2024-04-27 00:37:39.474219] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.118 pt3 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.118 
00:37:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.118 "name": "raid_bdev1", 00:18:06.118 "uuid": "bcacc142-6baf-4e56-8414-544a9ddeedf8", 00:18:06.118 "strip_size_kb": 64, 00:18:06.118 "state": "online", 00:18:06.118 "raid_level": "concat", 00:18:06.118 "superblock": true, 00:18:06.118 "num_base_bdevs": 3, 00:18:06.118 "num_base_bdevs_discovered": 3, 00:18:06.118 "num_base_bdevs_operational": 3, 00:18:06.118 "base_bdevs_list": [ 00:18:06.118 { 00:18:06.118 "name": "pt1", 00:18:06.118 "uuid": "50d7354e-f970-54c0-aec7-a5ff784c3cab", 00:18:06.118 "is_configured": true, 00:18:06.118 "data_offset": 2048, 00:18:06.118 "data_size": 63488 00:18:06.118 }, 00:18:06.118 { 00:18:06.118 "name": "pt2", 00:18:06.118 "uuid": "24e36dd0-64c6-51a8-aae5-5529589786ee", 00:18:06.118 "is_configured": true, 00:18:06.118 "data_offset": 2048, 00:18:06.118 "data_size": 63488 00:18:06.118 }, 00:18:06.118 { 00:18:06.118 "name": "pt3", 00:18:06.118 "uuid": "6d3a965e-8f1a-5e18-99e7-d86cc4557eb1", 00:18:06.118 "is_configured": true, 00:18:06.118 "data_offset": 2048, 00:18:06.118 "data_size": 63488 00:18:06.118 } 00:18:06.118 ] 00:18:06.118 }' 00:18:06.118 00:37:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.118 00:37:39 -- common/autotest_common.sh@10 -- # set +x 00:18:07.054 00:37:40 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:07.054 00:37:40 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:07.054 [2024-04-27 00:37:40.551535] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.054 00:37:40 -- bdev/bdev_raid.sh@430 -- # '[' bcacc142-6baf-4e56-8414-544a9ddeedf8 '!=' bcacc142-6baf-4e56-8414-544a9ddeedf8 ']' 00:18:07.054 00:37:40 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:07.054 00:37:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:07.054 00:37:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:07.054 00:37:40 -- bdev/bdev_raid.sh@511 -- # killprocess 124273 00:18:07.054 00:37:40 -- common/autotest_common.sh@936 -- # '[' -z 124273 ']' 00:18:07.054 00:37:40 -- common/autotest_common.sh@940 -- # kill -0 124273 00:18:07.054 00:37:40 -- common/autotest_common.sh@941 -- # uname 00:18:07.054 00:37:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.054 00:37:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124273 00:18:07.054 killing process with pid 124273 00:18:07.054 00:37:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:07.054 00:37:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:07.054 00:37:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124273' 00:18:07.054 00:37:40 -- common/autotest_common.sh@955 -- # kill 124273 00:18:07.054 [2024-04-27 00:37:40.598678] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.055 00:37:40 -- common/autotest_common.sh@960 -- # wait 124273 00:18:07.055 [2024-04-27 00:37:40.598786] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.055 [2024-04-27 00:37:40.598856] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.055 [2024-04-27 00:37:40.598867] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:18:07.313 [2024-04-27 00:37:40.815833] 
bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.252 00:37:41 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:08.252 00:18:08.252 real 0m10.870s 00:18:08.252 user 0m18.848s 00:18:08.252 sys 0m1.341s 00:18:08.252 00:37:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:08.252 ************************************ 00:18:08.252 END TEST raid_superblock_test 00:18:08.252 ************************************ 00:18:08.252 00:37:41 -- common/autotest_common.sh@10 -- # set +x 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:18:08.511 00:37:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:08.511 00:37:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:08.511 00:37:41 -- common/autotest_common.sh@10 -- # set +x 00:18:08.511 ************************************ 00:18:08.511 START TEST raid_state_function_test 00:18:08.511 ************************************ 00:18:08.511 00:37:41 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 3 false 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=124589 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:08.511 Process raid pid: 124589 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124589' 00:18:08.511 00:37:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124589 /var/tmp/spdk-raid.sock 00:18:08.511 00:37:41 -- common/autotest_common.sh@817 -- # '[' -z 124589 ']' 00:18:08.511 00:37:41 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:08.511 00:37:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.511 00:37:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:08.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:08.511 00:37:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.511 00:37:41 -- common/autotest_common.sh@10 -- # set +x 00:18:08.511 [2024-04-27 00:37:41.960715] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:08.511 [2024-04-27 00:37:41.961108] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.770 [2024-04-27 00:37:42.116576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.770 [2024-04-27 00:37:42.312132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.029 [2024-04-27 00:37:42.500306] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.595 00:37:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.595 00:37:42 -- common/autotest_common.sh@850 -- # return 0 00:18:09.595 00:37:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:09.852 [2024-04-27 00:37:43.233148] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.852 [2024-04-27 00:37:43.233470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.852 [2024-04-27 00:37:43.233626] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.852 [2024-04-27 00:37:43.233695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.852 [2024-04-27 00:37:43.233801] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:09.852 [2024-04-27 00:37:43.233954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.852 00:37:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.111 00:37:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.111 "name": 
"Existed_Raid", 00:18:10.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.111 "strip_size_kb": 0, 00:18:10.111 "state": "configuring", 00:18:10.111 "raid_level": "raid1", 00:18:10.111 "superblock": false, 00:18:10.111 "num_base_bdevs": 3, 00:18:10.111 "num_base_bdevs_discovered": 0, 00:18:10.111 "num_base_bdevs_operational": 3, 00:18:10.111 "base_bdevs_list": [ 00:18:10.111 { 00:18:10.111 "name": "BaseBdev1", 00:18:10.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.111 "is_configured": false, 00:18:10.111 "data_offset": 0, 00:18:10.111 "data_size": 0 00:18:10.111 }, 00:18:10.111 { 00:18:10.111 "name": "BaseBdev2", 00:18:10.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.111 "is_configured": false, 00:18:10.111 "data_offset": 0, 00:18:10.111 "data_size": 0 00:18:10.111 }, 00:18:10.111 { 00:18:10.111 "name": "BaseBdev3", 00:18:10.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.111 "is_configured": false, 00:18:10.111 "data_offset": 0, 00:18:10.111 "data_size": 0 00:18:10.111 } 00:18:10.111 ] 00:18:10.111 }' 00:18:10.111 00:37:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.111 00:37:43 -- common/autotest_common.sh@10 -- # set +x 00:18:10.679 00:37:44 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:10.940 [2024-04-27 00:37:44.401234] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:10.940 [2024-04-27 00:37:44.401420] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:18:10.940 00:37:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:11.199 [2024-04-27 00:37:44.665305] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.199 [2024-04-27 00:37:44.665607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.199 [2024-04-27 00:37:44.665743] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.199 [2024-04-27 00:37:44.665805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.199 [2024-04-27 00:37:44.665920] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:11.199 [2024-04-27 00:37:44.665991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:11.199 00:37:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:11.458 [2024-04-27 00:37:44.914539] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:11.458 BaseBdev1 00:18:11.458 00:37:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:11.458 00:37:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:11.458 00:37:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:11.458 00:37:44 -- common/autotest_common.sh@887 -- # local i 00:18:11.458 00:37:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:11.458 00:37:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:11.458 00:37:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.716 
00:37:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:11.975 [ 00:18:11.975 { 00:18:11.975 "name": "BaseBdev1", 00:18:11.975 "aliases": [ 00:18:11.975 "a68b4476-e506-42b5-aee9-0a8624edbdb7" 00:18:11.975 ], 00:18:11.975 "product_name": "Malloc disk", 00:18:11.975 "block_size": 512, 00:18:11.975 "num_blocks": 65536, 00:18:11.975 "uuid": "a68b4476-e506-42b5-aee9-0a8624edbdb7", 00:18:11.975 "assigned_rate_limits": { 00:18:11.975 "rw_ios_per_sec": 0, 00:18:11.975 "rw_mbytes_per_sec": 0, 00:18:11.975 "r_mbytes_per_sec": 0, 00:18:11.975 "w_mbytes_per_sec": 0 00:18:11.975 }, 00:18:11.975 "claimed": true, 00:18:11.975 "claim_type": "exclusive_write", 00:18:11.975 "zoned": false, 00:18:11.975 "supported_io_types": { 00:18:11.975 "read": true, 00:18:11.975 "write": true, 00:18:11.975 "unmap": true, 00:18:11.975 "write_zeroes": true, 00:18:11.975 "flush": true, 00:18:11.975 "reset": true, 00:18:11.975 "compare": false, 00:18:11.975 "compare_and_write": false, 00:18:11.975 "abort": true, 00:18:11.975 "nvme_admin": false, 00:18:11.975 "nvme_io": false 00:18:11.975 }, 00:18:11.975 "memory_domains": [ 00:18:11.975 { 00:18:11.975 "dma_device_id": "system", 00:18:11.975 "dma_device_type": 1 00:18:11.975 }, 00:18:11.975 { 00:18:11.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.975 "dma_device_type": 2 00:18:11.975 } 00:18:11.975 ], 00:18:11.975 "driver_specific": {} 00:18:11.975 } 00:18:11.975 ] 00:18:11.975 00:37:45 -- common/autotest_common.sh@893 -- # return 0 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.975 00:37:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.234 00:37:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.234 "name": "Existed_Raid", 00:18:12.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.234 "strip_size_kb": 0, 00:18:12.234 "state": "configuring", 00:18:12.234 "raid_level": "raid1", 00:18:12.234 "superblock": false, 00:18:12.234 "num_base_bdevs": 3, 00:18:12.234 "num_base_bdevs_discovered": 1, 00:18:12.234 "num_base_bdevs_operational": 3, 00:18:12.234 "base_bdevs_list": [ 00:18:12.234 { 00:18:12.234 "name": "BaseBdev1", 00:18:12.234 "uuid": "a68b4476-e506-42b5-aee9-0a8624edbdb7", 00:18:12.234 "is_configured": true, 00:18:12.234 "data_offset": 0, 00:18:12.234 "data_size": 65536 00:18:12.234 }, 00:18:12.234 { 00:18:12.234 "name": "BaseBdev2", 00:18:12.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.234 "is_configured": false, 00:18:12.234 "data_offset": 0, 00:18:12.234 "data_size": 0 00:18:12.234 }, 
00:18:12.234 { 00:18:12.234 "name": "BaseBdev3", 00:18:12.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.234 "is_configured": false, 00:18:12.234 "data_offset": 0, 00:18:12.234 "data_size": 0 00:18:12.234 } 00:18:12.234 ] 00:18:12.234 }' 00:18:12.234 00:37:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.234 00:37:45 -- common/autotest_common.sh@10 -- # set +x 00:18:12.802 00:37:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:13.062 [2024-04-27 00:37:46.618998] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.062 [2024-04-27 00:37:46.619304] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:18:13.062 00:37:46 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:13.062 00:37:46 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:13.321 [2024-04-27 00:37:46.879129] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.321 [2024-04-27 00:37:46.881329] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.321 [2024-04-27 00:37:46.881542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.321 [2024-04-27 00:37:46.881714] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:13.321 [2024-04-27 00:37:46.881843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.321 00:37:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.580 00:37:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.580 "name": "Existed_Raid", 00:18:13.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.580 "strip_size_kb": 0, 00:18:13.580 "state": "configuring", 00:18:13.580 "raid_level": "raid1", 00:18:13.580 "superblock": false, 00:18:13.580 "num_base_bdevs": 3, 00:18:13.580 "num_base_bdevs_discovered": 1, 00:18:13.580 "num_base_bdevs_operational": 3, 00:18:13.580 "base_bdevs_list": [ 00:18:13.580 { 00:18:13.580 "name": "BaseBdev1", 00:18:13.580 "uuid": "a68b4476-e506-42b5-aee9-0a8624edbdb7", 00:18:13.580 "is_configured": true, 00:18:13.580 
"data_offset": 0, 00:18:13.580 "data_size": 65536 00:18:13.580 }, 00:18:13.580 { 00:18:13.580 "name": "BaseBdev2", 00:18:13.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.580 "is_configured": false, 00:18:13.580 "data_offset": 0, 00:18:13.580 "data_size": 0 00:18:13.580 }, 00:18:13.580 { 00:18:13.580 "name": "BaseBdev3", 00:18:13.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.580 "is_configured": false, 00:18:13.580 "data_offset": 0, 00:18:13.580 "data_size": 0 00:18:13.580 } 00:18:13.580 ] 00:18:13.580 }' 00:18:13.580 00:37:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.580 00:37:47 -- common/autotest_common.sh@10 -- # set +x 00:18:14.517 00:37:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:14.517 [2024-04-27 00:37:48.013788] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.517 BaseBdev2 00:18:14.517 00:37:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:14.517 00:37:48 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:14.517 00:37:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:14.517 00:37:48 -- common/autotest_common.sh@887 -- # local i 00:18:14.517 00:37:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:14.517 00:37:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:14.517 00:37:48 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.776 00:37:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.034 [ 00:18:15.034 { 00:18:15.034 "name": "BaseBdev2", 00:18:15.035 "aliases": [ 00:18:15.035 "35b612bb-bd85-40f2-9555-eaee97acedbc" 00:18:15.035 ], 00:18:15.035 "product_name": "Malloc disk", 00:18:15.035 "block_size": 512, 00:18:15.035 "num_blocks": 65536, 00:18:15.035 "uuid": "35b612bb-bd85-40f2-9555-eaee97acedbc", 00:18:15.035 "assigned_rate_limits": { 00:18:15.035 "rw_ios_per_sec": 0, 00:18:15.035 "rw_mbytes_per_sec": 0, 00:18:15.035 "r_mbytes_per_sec": 0, 00:18:15.035 "w_mbytes_per_sec": 0 00:18:15.035 }, 00:18:15.035 "claimed": true, 00:18:15.035 "claim_type": "exclusive_write", 00:18:15.035 "zoned": false, 00:18:15.035 "supported_io_types": { 00:18:15.035 "read": true, 00:18:15.035 "write": true, 00:18:15.035 "unmap": true, 00:18:15.035 "write_zeroes": true, 00:18:15.035 "flush": true, 00:18:15.035 "reset": true, 00:18:15.035 "compare": false, 00:18:15.035 "compare_and_write": false, 00:18:15.035 "abort": true, 00:18:15.035 "nvme_admin": false, 00:18:15.035 "nvme_io": false 00:18:15.035 }, 00:18:15.035 "memory_domains": [ 00:18:15.035 { 00:18:15.035 "dma_device_id": "system", 00:18:15.035 "dma_device_type": 1 00:18:15.035 }, 00:18:15.035 { 00:18:15.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.035 "dma_device_type": 2 00:18:15.035 } 00:18:15.035 ], 00:18:15.035 "driver_specific": {} 00:18:15.035 } 00:18:15.035 ] 00:18:15.035 00:37:48 -- common/autotest_common.sh@893 -- # return 0 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.035 00:37:48 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.035 00:37:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.293 00:37:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.293 "name": "Existed_Raid", 00:18:15.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.294 "strip_size_kb": 0, 00:18:15.294 "state": "configuring", 00:18:15.294 "raid_level": "raid1", 00:18:15.294 "superblock": false, 00:18:15.294 "num_base_bdevs": 3, 00:18:15.294 "num_base_bdevs_discovered": 2, 00:18:15.294 "num_base_bdevs_operational": 3, 00:18:15.294 "base_bdevs_list": [ 00:18:15.294 { 00:18:15.294 "name": "BaseBdev1", 00:18:15.294 "uuid": "a68b4476-e506-42b5-aee9-0a8624edbdb7", 00:18:15.294 "is_configured": true, 00:18:15.294 "data_offset": 0, 00:18:15.294 "data_size": 65536 00:18:15.294 }, 00:18:15.294 { 00:18:15.294 "name": "BaseBdev2", 00:18:15.294 "uuid": "35b612bb-bd85-40f2-9555-eaee97acedbc", 00:18:15.294 "is_configured": true, 00:18:15.294 "data_offset": 0, 00:18:15.294 "data_size": 65536 00:18:15.294 }, 00:18:15.294 { 00:18:15.294 "name": "BaseBdev3", 00:18:15.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.294 "is_configured": false, 00:18:15.294 "data_offset": 0, 00:18:15.294 "data_size": 0 00:18:15.294 } 00:18:15.294 ] 00:18:15.294 }' 00:18:15.294 00:37:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.294 00:37:48 -- common/autotest_common.sh@10 -- # set +x 00:18:15.861 00:37:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:16.428 [2024-04-27 00:37:49.710442] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.428 [2024-04-27 00:37:49.710889] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:18:16.428 [2024-04-27 00:37:49.710943] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:16.428 [2024-04-27 00:37:49.711233] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:16.428 [2024-04-27 00:37:49.711802] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:18:16.428 [2024-04-27 00:37:49.711945] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:18:16.428 [2024-04-27 00:37:49.712426] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.428 BaseBdev3 00:18:16.428 00:37:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:16.428 00:37:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:16.428 00:37:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:16.428 00:37:49 -- common/autotest_common.sh@887 -- # local i 00:18:16.428 00:37:49 -- 
common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:16.428 00:37:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:16.428 00:37:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:16.428 00:37:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:16.687 [ 00:18:16.687 { 00:18:16.687 "name": "BaseBdev3", 00:18:16.687 "aliases": [ 00:18:16.687 "5bd5468c-1c9f-4fe7-9296-72d22778c6c9" 00:18:16.687 ], 00:18:16.687 "product_name": "Malloc disk", 00:18:16.687 "block_size": 512, 00:18:16.687 "num_blocks": 65536, 00:18:16.687 "uuid": "5bd5468c-1c9f-4fe7-9296-72d22778c6c9", 00:18:16.687 "assigned_rate_limits": { 00:18:16.687 "rw_ios_per_sec": 0, 00:18:16.687 "rw_mbytes_per_sec": 0, 00:18:16.687 "r_mbytes_per_sec": 0, 00:18:16.687 "w_mbytes_per_sec": 0 00:18:16.687 }, 00:18:16.687 "claimed": true, 00:18:16.687 "claim_type": "exclusive_write", 00:18:16.687 "zoned": false, 00:18:16.687 "supported_io_types": { 00:18:16.687 "read": true, 00:18:16.687 "write": true, 00:18:16.687 "unmap": true, 00:18:16.687 "write_zeroes": true, 00:18:16.687 "flush": true, 00:18:16.687 "reset": true, 00:18:16.687 "compare": false, 00:18:16.687 "compare_and_write": false, 00:18:16.687 "abort": true, 00:18:16.687 "nvme_admin": false, 00:18:16.687 "nvme_io": false 00:18:16.687 }, 00:18:16.687 "memory_domains": [ 00:18:16.687 { 00:18:16.687 "dma_device_id": "system", 00:18:16.687 "dma_device_type": 1 00:18:16.687 }, 00:18:16.687 { 00:18:16.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.688 "dma_device_type": 2 00:18:16.688 } 00:18:16.688 ], 00:18:16.688 "driver_specific": {} 00:18:16.688 } 00:18:16.688 ] 00:18:16.688 00:37:50 -- common/autotest_common.sh@893 -- # return 0 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.688 00:37:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.946 00:37:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.946 "name": "Existed_Raid", 00:18:16.946 "uuid": "f4211889-ae22-4919-91b0-a3c6ff121f80", 00:18:16.946 "strip_size_kb": 0, 00:18:16.946 "state": "online", 00:18:16.946 "raid_level": "raid1", 00:18:16.946 "superblock": false, 00:18:16.946 "num_base_bdevs": 3, 00:18:16.946 "num_base_bdevs_discovered": 3, 00:18:16.946 "num_base_bdevs_operational": 3, 00:18:16.946 "base_bdevs_list": [ 00:18:16.946 { 00:18:16.946 "name": 
"BaseBdev1", 00:18:16.946 "uuid": "a68b4476-e506-42b5-aee9-0a8624edbdb7", 00:18:16.946 "is_configured": true, 00:18:16.946 "data_offset": 0, 00:18:16.946 "data_size": 65536 00:18:16.946 }, 00:18:16.946 { 00:18:16.946 "name": "BaseBdev2", 00:18:16.946 "uuid": "35b612bb-bd85-40f2-9555-eaee97acedbc", 00:18:16.946 "is_configured": true, 00:18:16.946 "data_offset": 0, 00:18:16.946 "data_size": 65536 00:18:16.946 }, 00:18:16.946 { 00:18:16.946 "name": "BaseBdev3", 00:18:16.946 "uuid": "5bd5468c-1c9f-4fe7-9296-72d22778c6c9", 00:18:16.946 "is_configured": true, 00:18:16.946 "data_offset": 0, 00:18:16.946 "data_size": 65536 00:18:16.946 } 00:18:16.946 ] 00:18:16.946 }' 00:18:16.946 00:37:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.946 00:37:50 -- common/autotest_common.sh@10 -- # set +x 00:18:17.513 00:37:51 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:17.772 [2024-04-27 00:37:51.255192] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.772 00:37:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.050 00:37:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.050 00:37:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.050 "name": "Existed_Raid", 00:18:18.050 "uuid": "f4211889-ae22-4919-91b0-a3c6ff121f80", 00:18:18.050 "strip_size_kb": 0, 00:18:18.050 "state": "online", 00:18:18.050 "raid_level": "raid1", 00:18:18.050 "superblock": false, 00:18:18.050 "num_base_bdevs": 3, 00:18:18.050 "num_base_bdevs_discovered": 2, 00:18:18.050 "num_base_bdevs_operational": 2, 00:18:18.050 "base_bdevs_list": [ 00:18:18.050 { 00:18:18.050 "name": null, 00:18:18.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.050 "is_configured": false, 00:18:18.050 "data_offset": 0, 00:18:18.050 "data_size": 65536 00:18:18.050 }, 00:18:18.050 { 00:18:18.050 "name": "BaseBdev2", 00:18:18.050 "uuid": "35b612bb-bd85-40f2-9555-eaee97acedbc", 00:18:18.050 "is_configured": true, 00:18:18.050 "data_offset": 0, 00:18:18.050 "data_size": 65536 00:18:18.050 }, 00:18:18.050 { 00:18:18.050 "name": "BaseBdev3", 00:18:18.050 "uuid": "5bd5468c-1c9f-4fe7-9296-72d22778c6c9", 00:18:18.050 "is_configured": true, 00:18:18.050 "data_offset": 0, 00:18:18.050 "data_size": 
65536 00:18:18.050 } 00:18:18.050 ] 00:18:18.050 }' 00:18:18.050 00:37:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.050 00:37:51 -- common/autotest_common.sh@10 -- # set +x 00:18:18.991 00:37:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:18.991 00:37:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:18.991 00:37:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.991 00:37:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:18.991 00:37:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:18.991 00:37:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:18.991 00:37:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:19.250 [2024-04-27 00:37:52.752520] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:19.509 00:37:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:19.509 00:37:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:19.509 00:37:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:19.509 00:37:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.509 00:37:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:19.509 00:37:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:19.509 00:37:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:19.767 [2024-04-27 00:37:53.260031] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:19.767 [2024-04-27 00:37:53.260162] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:19.767 [2024-04-27 00:37:53.333193] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:19.767 [2024-04-27 00:37:53.333374] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.767 [2024-04-27 00:37:53.333392] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:18:19.767 00:37:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:19.767 00:37:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:19.767 00:37:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.767 00:37:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:20.026 00:37:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:20.026 00:37:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:20.026 00:37:53 -- bdev/bdev_raid.sh@287 -- # killprocess 124589 00:18:20.026 00:37:53 -- common/autotest_common.sh@936 -- # '[' -z 124589 ']' 00:18:20.026 00:37:53 -- common/autotest_common.sh@940 -- # kill -0 124589 00:18:20.026 00:37:53 -- common/autotest_common.sh@941 -- # uname 00:18:20.026 00:37:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:20.026 00:37:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124589 00:18:20.026 killing process with pid 124589 00:18:20.026 00:37:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:20.026 00:37:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:20.026 00:37:53 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 124589' 00:18:20.026 00:37:53 -- common/autotest_common.sh@955 -- # kill 124589 00:18:20.026 00:37:53 -- common/autotest_common.sh@960 -- # wait 124589 00:18:20.026 [2024-04-27 00:37:53.601053] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.026 [2024-04-27 00:37:53.601211] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:21.402 ************************************ 00:18:21.402 END TEST raid_state_function_test 00:18:21.402 ************************************ 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:21.402 00:18:21.402 real 0m12.832s 00:18:21.402 user 0m22.398s 00:18:21.402 sys 0m1.673s 00:18:21.402 00:37:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:21.402 00:37:54 -- common/autotest_common.sh@10 -- # set +x 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:18:21.402 00:37:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:21.402 00:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:21.402 00:37:54 -- common/autotest_common.sh@10 -- # set +x 00:18:21.402 ************************************ 00:18:21.402 START TEST raid_state_function_test_sb 00:18:21.402 ************************************ 00:18:21.402 00:37:54 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 3 true 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=124982 00:18:21.402 Process raid pid: 124982 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124982' 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@225 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:21.402 00:37:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124982 /var/tmp/spdk-raid.sock 00:18:21.402 00:37:54 -- common/autotest_common.sh@817 -- # '[' -z 124982 ']' 00:18:21.402 00:37:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:21.402 00:37:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:21.402 00:37:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:21.402 00:37:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.402 00:37:54 -- common/autotest_common.sh@10 -- # set +x 00:18:21.402 [2024-04-27 00:37:54.886745] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:18:21.402 [2024-04-27 00:37:54.886979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.660 [2024-04-27 00:37:55.046319] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.919 [2024-04-27 00:37:55.255748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.919 [2024-04-27 00:37:55.459499] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.487 00:37:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.487 00:37:55 -- common/autotest_common.sh@850 -- # return 0 00:18:22.487 00:37:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:22.487 [2024-04-27 00:37:56.027375] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:22.487 [2024-04-27 00:37:56.027509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:22.487 [2024-04-27 00:37:56.027536] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:22.487 [2024-04-27 00:37:56.027561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:22.487 [2024-04-27 00:37:56.027570] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:22.487 [2024-04-27 00:37:56.027630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.487 00:37:56 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.487 00:37:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.745 00:37:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.745 "name": "Existed_Raid", 00:18:22.745 "uuid": "da1cde62-8429-4a46-88b9-c908c9b741bf", 00:18:22.745 "strip_size_kb": 0, 00:18:22.745 "state": "configuring", 00:18:22.745 "raid_level": "raid1", 00:18:22.745 "superblock": true, 00:18:22.745 "num_base_bdevs": 3, 00:18:22.745 "num_base_bdevs_discovered": 0, 00:18:22.745 "num_base_bdevs_operational": 3, 00:18:22.745 "base_bdevs_list": [ 00:18:22.745 { 00:18:22.745 "name": "BaseBdev1", 00:18:22.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.745 "is_configured": false, 00:18:22.745 "data_offset": 0, 00:18:22.745 "data_size": 0 00:18:22.745 }, 00:18:22.745 { 00:18:22.745 "name": "BaseBdev2", 00:18:22.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.745 "is_configured": false, 00:18:22.745 "data_offset": 0, 00:18:22.745 "data_size": 0 00:18:22.745 }, 00:18:22.745 { 00:18:22.745 "name": "BaseBdev3", 00:18:22.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.745 "is_configured": false, 00:18:22.745 "data_offset": 0, 00:18:22.745 "data_size": 0 00:18:22.745 } 00:18:22.745 ] 00:18:22.745 }' 00:18:22.745 00:37:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.745 00:37:56 -- common/autotest_common.sh@10 -- # set +x 00:18:23.678 00:37:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:23.678 [2024-04-27 00:37:57.183477] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.678 [2024-04-27 00:37:57.183555] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:18:23.678 00:37:57 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:23.937 [2024-04-27 00:37:57.427498] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.937 [2024-04-27 00:37:57.427593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.937 [2024-04-27 00:37:57.427608] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.937 [2024-04-27 00:37:57.427630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.937 [2024-04-27 00:37:57.427639] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.937 [2024-04-27 00:37:57.427668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.937 00:37:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:24.195 [2024-04-27 00:37:57.719764] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.195 BaseBdev1 00:18:24.195 00:37:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:24.195 00:37:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:24.195 00:37:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:24.195 00:37:57 -- common/autotest_common.sh@887 -- # local i 
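The superblock variant drives the same flow; the only difference in the creation call is the -s flag, which reserves space for raid metadata on each base bdev. That is why the bdev_get_bdevs output later in this run reports data_offset 2048 and data_size 63488 where the non-superblock run showed 0 and 65536. A sketch of the equivalent manual calls, under the same assumptions as the earlier sketch:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# -s: store an on-disk superblock on every base bdev.
rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
rpc bdev_malloc_create 32 512 -b BaseBdev1   # claimed by Existed_Raid on examine
rpc bdev_wait_for_examine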
00:18:24.195 00:37:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:24.195 00:37:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:24.195 00:37:57 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.454 00:37:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.714 [ 00:18:24.714 { 00:18:24.714 "name": "BaseBdev1", 00:18:24.714 "aliases": [ 00:18:24.714 "260490ad-f1f2-4819-9847-5fc3342423a8" 00:18:24.714 ], 00:18:24.714 "product_name": "Malloc disk", 00:18:24.714 "block_size": 512, 00:18:24.714 "num_blocks": 65536, 00:18:24.714 "uuid": "260490ad-f1f2-4819-9847-5fc3342423a8", 00:18:24.714 "assigned_rate_limits": { 00:18:24.714 "rw_ios_per_sec": 0, 00:18:24.714 "rw_mbytes_per_sec": 0, 00:18:24.714 "r_mbytes_per_sec": 0, 00:18:24.714 "w_mbytes_per_sec": 0 00:18:24.714 }, 00:18:24.714 "claimed": true, 00:18:24.714 "claim_type": "exclusive_write", 00:18:24.714 "zoned": false, 00:18:24.714 "supported_io_types": { 00:18:24.714 "read": true, 00:18:24.714 "write": true, 00:18:24.714 "unmap": true, 00:18:24.714 "write_zeroes": true, 00:18:24.714 "flush": true, 00:18:24.714 "reset": true, 00:18:24.714 "compare": false, 00:18:24.714 "compare_and_write": false, 00:18:24.714 "abort": true, 00:18:24.714 "nvme_admin": false, 00:18:24.714 "nvme_io": false 00:18:24.714 }, 00:18:24.714 "memory_domains": [ 00:18:24.714 { 00:18:24.714 "dma_device_id": "system", 00:18:24.714 "dma_device_type": 1 00:18:24.714 }, 00:18:24.714 { 00:18:24.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.714 "dma_device_type": 2 00:18:24.714 } 00:18:24.714 ], 00:18:24.714 "driver_specific": {} 00:18:24.714 } 00:18:24.714 ] 00:18:24.714 00:37:58 -- common/autotest_common.sh@893 -- # return 0 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.714 00:37:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.973 00:37:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:24.973 "name": "Existed_Raid", 00:18:24.973 "uuid": "b58d5b8e-ea3d-4f92-8ca4-bc216d5a0752", 00:18:24.973 "strip_size_kb": 0, 00:18:24.973 "state": "configuring", 00:18:24.973 "raid_level": "raid1", 00:18:24.973 "superblock": true, 00:18:24.973 "num_base_bdevs": 3, 00:18:24.973 "num_base_bdevs_discovered": 1, 00:18:24.973 "num_base_bdevs_operational": 3, 00:18:24.973 "base_bdevs_list": [ 00:18:24.973 { 00:18:24.973 "name": "BaseBdev1", 00:18:24.973 "uuid": "260490ad-f1f2-4819-9847-5fc3342423a8", 00:18:24.973 
"is_configured": true, 00:18:24.973 "data_offset": 2048, 00:18:24.973 "data_size": 63488 00:18:24.973 }, 00:18:24.973 { 00:18:24.973 "name": "BaseBdev2", 00:18:24.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.973 "is_configured": false, 00:18:24.973 "data_offset": 0, 00:18:24.973 "data_size": 0 00:18:24.973 }, 00:18:24.973 { 00:18:24.973 "name": "BaseBdev3", 00:18:24.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.973 "is_configured": false, 00:18:24.973 "data_offset": 0, 00:18:24.973 "data_size": 0 00:18:24.973 } 00:18:24.973 ] 00:18:24.973 }' 00:18:24.973 00:37:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:24.973 00:37:58 -- common/autotest_common.sh@10 -- # set +x 00:18:25.633 00:37:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:25.892 [2024-04-27 00:37:59.380218] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.892 [2024-04-27 00:37:59.380299] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:18:25.892 00:37:59 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:25.892 00:37:59 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:26.151 00:37:59 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:26.409 BaseBdev1 00:18:26.409 00:37:59 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:26.409 00:37:59 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:26.409 00:37:59 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:26.409 00:37:59 -- common/autotest_common.sh@887 -- # local i 00:18:26.409 00:37:59 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:26.409 00:37:59 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:26.409 00:37:59 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.667 00:38:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:26.925 [ 00:18:26.925 { 00:18:26.925 "name": "BaseBdev1", 00:18:26.925 "aliases": [ 00:18:26.925 "3d4893ad-7404-4a1b-aa2e-53a951874827" 00:18:26.925 ], 00:18:26.925 "product_name": "Malloc disk", 00:18:26.925 "block_size": 512, 00:18:26.925 "num_blocks": 65536, 00:18:26.925 "uuid": "3d4893ad-7404-4a1b-aa2e-53a951874827", 00:18:26.925 "assigned_rate_limits": { 00:18:26.925 "rw_ios_per_sec": 0, 00:18:26.925 "rw_mbytes_per_sec": 0, 00:18:26.925 "r_mbytes_per_sec": 0, 00:18:26.925 "w_mbytes_per_sec": 0 00:18:26.925 }, 00:18:26.925 "claimed": false, 00:18:26.925 "zoned": false, 00:18:26.925 "supported_io_types": { 00:18:26.925 "read": true, 00:18:26.925 "write": true, 00:18:26.925 "unmap": true, 00:18:26.925 "write_zeroes": true, 00:18:26.925 "flush": true, 00:18:26.925 "reset": true, 00:18:26.925 "compare": false, 00:18:26.925 "compare_and_write": false, 00:18:26.925 "abort": true, 00:18:26.925 "nvme_admin": false, 00:18:26.925 "nvme_io": false 00:18:26.925 }, 00:18:26.925 "memory_domains": [ 00:18:26.925 { 00:18:26.925 "dma_device_id": "system", 00:18:26.925 "dma_device_type": 1 00:18:26.925 }, 00:18:26.925 { 00:18:26.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.925 "dma_device_type": 2 
00:18:26.925 } 00:18:26.925 ], 00:18:26.925 "driver_specific": {} 00:18:26.925 } 00:18:26.925 ] 00:18:26.925 00:38:00 -- common/autotest_common.sh@893 -- # return 0 00:18:26.925 00:38:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:27.183 [2024-04-27 00:38:00.659596] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:27.183 [2024-04-27 00:38:00.661615] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.183 [2024-04-27 00:38:00.661689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.183 [2024-04-27 00:38:00.661717] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.183 [2024-04-27 00:38:00.661741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.183 00:38:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.442 00:38:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.442 "name": "Existed_Raid", 00:18:27.442 "uuid": "9e257825-ec50-4e41-b1f9-c47a8661457c", 00:18:27.442 "strip_size_kb": 0, 00:18:27.442 "state": "configuring", 00:18:27.442 "raid_level": "raid1", 00:18:27.442 "superblock": true, 00:18:27.442 "num_base_bdevs": 3, 00:18:27.442 "num_base_bdevs_discovered": 1, 00:18:27.442 "num_base_bdevs_operational": 3, 00:18:27.442 "base_bdevs_list": [ 00:18:27.442 { 00:18:27.442 "name": "BaseBdev1", 00:18:27.442 "uuid": "3d4893ad-7404-4a1b-aa2e-53a951874827", 00:18:27.442 "is_configured": true, 00:18:27.442 "data_offset": 2048, 00:18:27.442 "data_size": 63488 00:18:27.442 }, 00:18:27.442 { 00:18:27.442 "name": "BaseBdev2", 00:18:27.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.442 "is_configured": false, 00:18:27.442 "data_offset": 0, 00:18:27.442 "data_size": 0 00:18:27.442 }, 00:18:27.442 { 00:18:27.442 "name": "BaseBdev3", 00:18:27.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.442 "is_configured": false, 00:18:27.442 "data_offset": 0, 00:18:27.442 "data_size": 0 00:18:27.442 } 00:18:27.442 ] 00:18:27.442 }' 00:18:27.442 00:38:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.442 00:38:00 -- common/autotest_common.sh@10 -- # set +x 00:18:28.016 00:38:01 -- 
bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:28.276 [2024-04-27 00:38:01.770853] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.276 BaseBdev2 00:18:28.276 00:38:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:28.276 00:38:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:28.276 00:38:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:28.276 00:38:01 -- common/autotest_common.sh@887 -- # local i 00:18:28.276 00:38:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:28.276 00:38:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:28.276 00:38:01 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.534 00:38:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.792 [ 00:18:28.792 { 00:18:28.792 "name": "BaseBdev2", 00:18:28.792 "aliases": [ 00:18:28.792 "5be6e294-d35d-4306-9aea-99f652b9771d" 00:18:28.792 ], 00:18:28.792 "product_name": "Malloc disk", 00:18:28.792 "block_size": 512, 00:18:28.792 "num_blocks": 65536, 00:18:28.792 "uuid": "5be6e294-d35d-4306-9aea-99f652b9771d", 00:18:28.792 "assigned_rate_limits": { 00:18:28.792 "rw_ios_per_sec": 0, 00:18:28.792 "rw_mbytes_per_sec": 0, 00:18:28.792 "r_mbytes_per_sec": 0, 00:18:28.792 "w_mbytes_per_sec": 0 00:18:28.792 }, 00:18:28.792 "claimed": true, 00:18:28.792 "claim_type": "exclusive_write", 00:18:28.792 "zoned": false, 00:18:28.792 "supported_io_types": { 00:18:28.792 "read": true, 00:18:28.792 "write": true, 00:18:28.792 "unmap": true, 00:18:28.792 "write_zeroes": true, 00:18:28.792 "flush": true, 00:18:28.792 "reset": true, 00:18:28.792 "compare": false, 00:18:28.792 "compare_and_write": false, 00:18:28.792 "abort": true, 00:18:28.792 "nvme_admin": false, 00:18:28.792 "nvme_io": false 00:18:28.792 }, 00:18:28.792 "memory_domains": [ 00:18:28.792 { 00:18:28.792 "dma_device_id": "system", 00:18:28.792 "dma_device_type": 1 00:18:28.792 }, 00:18:28.792 { 00:18:28.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.792 "dma_device_type": 2 00:18:28.792 } 00:18:28.792 ], 00:18:28.792 "driver_specific": {} 00:18:28.792 } 00:18:28.792 ] 00:18:28.792 00:38:02 -- common/autotest_common.sh@893 -- # return 0 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.792 00:38:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.050 00:38:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.050 "name": "Existed_Raid", 00:18:29.050 "uuid": "9e257825-ec50-4e41-b1f9-c47a8661457c", 00:18:29.050 "strip_size_kb": 0, 00:18:29.050 "state": "configuring", 00:18:29.050 "raid_level": "raid1", 00:18:29.050 "superblock": true, 00:18:29.050 "num_base_bdevs": 3, 00:18:29.050 "num_base_bdevs_discovered": 2, 00:18:29.050 "num_base_bdevs_operational": 3, 00:18:29.050 "base_bdevs_list": [ 00:18:29.050 { 00:18:29.050 "name": "BaseBdev1", 00:18:29.050 "uuid": "3d4893ad-7404-4a1b-aa2e-53a951874827", 00:18:29.050 "is_configured": true, 00:18:29.050 "data_offset": 2048, 00:18:29.050 "data_size": 63488 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "name": "BaseBdev2", 00:18:29.050 "uuid": "5be6e294-d35d-4306-9aea-99f652b9771d", 00:18:29.050 "is_configured": true, 00:18:29.050 "data_offset": 2048, 00:18:29.050 "data_size": 63488 00:18:29.050 }, 00:18:29.050 { 00:18:29.050 "name": "BaseBdev3", 00:18:29.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.050 "is_configured": false, 00:18:29.050 "data_offset": 0, 00:18:29.050 "data_size": 0 00:18:29.050 } 00:18:29.050 ] 00:18:29.050 }' 00:18:29.050 00:38:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.050 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:18:29.677 00:38:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:29.934 [2024-04-27 00:38:03.337576] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.934 [2024-04-27 00:38:03.337833] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:18:29.934 [2024-04-27 00:38:03.337849] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:29.934 [2024-04-27 00:38:03.338038] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:29.934 [2024-04-27 00:38:03.338477] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:18:29.934 [2024-04-27 00:38:03.338495] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:18:29.934 [2024-04-27 00:38:03.338661] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.934 BaseBdev3 00:18:29.934 00:38:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:29.934 00:38:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:29.934 00:38:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:29.934 00:38:03 -- common/autotest_common.sh@887 -- # local i 00:18:29.934 00:38:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:29.934 00:38:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:29.934 00:38:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:30.192 00:38:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:30.192 [ 00:18:30.192 { 00:18:30.192 "name": "BaseBdev3", 00:18:30.192 "aliases": [ 00:18:30.192 "6d5e94dd-827d-4c0f-af1e-5a22dcd98f79" 00:18:30.192 ], 00:18:30.192 "product_name": "Malloc disk", 00:18:30.192 "block_size": 512, 
00:18:30.192 "num_blocks": 65536, 00:18:30.192 "uuid": "6d5e94dd-827d-4c0f-af1e-5a22dcd98f79", 00:18:30.192 "assigned_rate_limits": { 00:18:30.192 "rw_ios_per_sec": 0, 00:18:30.192 "rw_mbytes_per_sec": 0, 00:18:30.192 "r_mbytes_per_sec": 0, 00:18:30.192 "w_mbytes_per_sec": 0 00:18:30.192 }, 00:18:30.192 "claimed": true, 00:18:30.192 "claim_type": "exclusive_write", 00:18:30.192 "zoned": false, 00:18:30.192 "supported_io_types": { 00:18:30.192 "read": true, 00:18:30.192 "write": true, 00:18:30.192 "unmap": true, 00:18:30.192 "write_zeroes": true, 00:18:30.192 "flush": true, 00:18:30.192 "reset": true, 00:18:30.192 "compare": false, 00:18:30.192 "compare_and_write": false, 00:18:30.192 "abort": true, 00:18:30.192 "nvme_admin": false, 00:18:30.192 "nvme_io": false 00:18:30.192 }, 00:18:30.192 "memory_domains": [ 00:18:30.192 { 00:18:30.192 "dma_device_id": "system", 00:18:30.192 "dma_device_type": 1 00:18:30.192 }, 00:18:30.192 { 00:18:30.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.192 "dma_device_type": 2 00:18:30.192 } 00:18:30.192 ], 00:18:30.192 "driver_specific": {} 00:18:30.192 } 00:18:30.192 ] 00:18:30.192 00:38:03 -- common/autotest_common.sh@893 -- # return 0 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.192 00:38:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.757 00:38:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.757 "name": "Existed_Raid", 00:18:30.757 "uuid": "9e257825-ec50-4e41-b1f9-c47a8661457c", 00:18:30.757 "strip_size_kb": 0, 00:18:30.757 "state": "online", 00:18:30.757 "raid_level": "raid1", 00:18:30.757 "superblock": true, 00:18:30.757 "num_base_bdevs": 3, 00:18:30.757 "num_base_bdevs_discovered": 3, 00:18:30.757 "num_base_bdevs_operational": 3, 00:18:30.757 "base_bdevs_list": [ 00:18:30.757 { 00:18:30.757 "name": "BaseBdev1", 00:18:30.757 "uuid": "3d4893ad-7404-4a1b-aa2e-53a951874827", 00:18:30.757 "is_configured": true, 00:18:30.757 "data_offset": 2048, 00:18:30.757 "data_size": 63488 00:18:30.757 }, 00:18:30.757 { 00:18:30.757 "name": "BaseBdev2", 00:18:30.757 "uuid": "5be6e294-d35d-4306-9aea-99f652b9771d", 00:18:30.757 "is_configured": true, 00:18:30.757 "data_offset": 2048, 00:18:30.757 "data_size": 63488 00:18:30.757 }, 00:18:30.757 { 00:18:30.757 "name": "BaseBdev3", 00:18:30.757 "uuid": "6d5e94dd-827d-4c0f-af1e-5a22dcd98f79", 00:18:30.757 "is_configured": true, 00:18:30.757 "data_offset": 2048, 00:18:30.757 "data_size": 63488 00:18:30.757 } 00:18:30.757 ] 00:18:30.757 }' 
00:18:30.757 00:38:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.757 00:38:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.322 00:38:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:31.322 [2024-04-27 00:38:04.838178] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.602 00:38:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.869 00:38:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.869 "name": "Existed_Raid", 00:18:31.869 "uuid": "9e257825-ec50-4e41-b1f9-c47a8661457c", 00:18:31.869 "strip_size_kb": 0, 00:18:31.869 "state": "online", 00:18:31.869 "raid_level": "raid1", 00:18:31.869 "superblock": true, 00:18:31.869 "num_base_bdevs": 3, 00:18:31.869 "num_base_bdevs_discovered": 2, 00:18:31.869 "num_base_bdevs_operational": 2, 00:18:31.869 "base_bdevs_list": [ 00:18:31.869 { 00:18:31.869 "name": null, 00:18:31.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.869 "is_configured": false, 00:18:31.869 "data_offset": 2048, 00:18:31.869 "data_size": 63488 00:18:31.869 }, 00:18:31.869 { 00:18:31.869 "name": "BaseBdev2", 00:18:31.869 "uuid": "5be6e294-d35d-4306-9aea-99f652b9771d", 00:18:31.869 "is_configured": true, 00:18:31.869 "data_offset": 2048, 00:18:31.869 "data_size": 63488 00:18:31.869 }, 00:18:31.869 { 00:18:31.869 "name": "BaseBdev3", 00:18:31.869 "uuid": "6d5e94dd-827d-4c0f-af1e-5a22dcd98f79", 00:18:31.869 "is_configured": true, 00:18:31.869 "data_offset": 2048, 00:18:31.869 "data_size": 63488 00:18:31.869 } 00:18:31.869 ] 00:18:31.869 }' 00:18:31.869 00:38:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.869 00:38:05 -- common/autotest_common.sh@10 -- # set +x 00:18:32.436 00:38:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:32.436 00:38:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.436 00:38:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.436 00:38:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.695 00:38:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:32.695 00:38:06 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.695 00:38:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:32.953 [2024-04-27 00:38:06.357183] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:32.953 00:38:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.953 00:38:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.953 00:38:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.953 00:38:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:33.212 00:38:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:33.212 00:38:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:33.212 00:38:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:33.470 [2024-04-27 00:38:07.013775] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:33.470 [2024-04-27 00:38:07.013946] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.729 [2024-04-27 00:38:07.094282] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.729 [2024-04-27 00:38:07.094478] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.729 [2024-04-27 00:38:07.094495] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:18:33.729 00:38:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:33.729 00:38:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:33.729 00:38:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.729 00:38:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:33.988 00:38:07 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:33.988 00:38:07 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:33.988 00:38:07 -- bdev/bdev_raid.sh@287 -- # killprocess 124982 00:18:33.988 00:38:07 -- common/autotest_common.sh@936 -- # '[' -z 124982 ']' 00:18:33.988 00:38:07 -- common/autotest_common.sh@940 -- # kill -0 124982 00:18:33.988 00:38:07 -- common/autotest_common.sh@941 -- # uname 00:18:33.988 00:38:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.988 00:38:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 124982 00:18:33.988 killing process with pid 124982 00:18:33.988 00:38:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:33.988 00:38:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:33.988 00:38:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 124982' 00:18:33.988 00:38:07 -- common/autotest_common.sh@955 -- # kill 124982 00:18:33.988 00:38:07 -- common/autotest_common.sh@960 -- # wait 124982 00:18:33.988 [2024-04-27 00:38:07.374699] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.988 [2024-04-27 00:38:07.374896] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.926 00:38:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:34.926 00:18:34.926 real 0m13.624s 00:18:34.926 user 0m24.038s 00:18:34.926 sys 0m1.593s 00:18:34.926 00:38:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:34.926 
************************************ 00:18:34.926 END TEST raid_state_function_test_sb 00:18:34.926 ************************************ 00:18:34.926 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:18:34.926 00:38:08 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:18:34.926 00:38:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:34.926 00:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:34.926 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:18:35.185 ************************************ 00:18:35.185 START TEST raid_superblock_test 00:18:35.185 ************************************ 00:18:35.185 00:38:08 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 3 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@357 -- # raid_pid=125386 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125386 /var/tmp/spdk-raid.sock 00:18:35.185 00:38:08 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:35.185 00:38:08 -- common/autotest_common.sh@817 -- # '[' -z 125386 ']' 00:18:35.185 00:38:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:35.185 00:38:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:35.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:35.185 00:38:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:35.185 00:38:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:35.185 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:18:35.185 [2024-04-27 00:38:08.597619] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
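Note: before any of the RPCs below can run, the test boots a bare bdev_svc application on a private RPC socket and waits for it to listen. A hedged sketch of that boot step, using the binary and socket paths from the trace; polling rpc_get_methods is an assumption standing in for what waitforlisten does internally:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Wait until the app answers on its UNIX-domain RPC socket:
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done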
00:18:35.185 [2024-04-27 00:38:08.597839] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125386 ] 00:18:35.185 [2024-04-27 00:38:08.758572] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.445 [2024-04-27 00:38:08.941023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.703 [2024-04-27 00:38:09.112643] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.962 00:38:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:35.962 00:38:09 -- common/autotest_common.sh@850 -- # return 0 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.962 00:38:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:36.221 malloc1 00:18:36.221 00:38:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:36.480 [2024-04-27 00:38:09.970503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:36.480 [2024-04-27 00:38:09.970674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.480 [2024-04-27 00:38:09.970731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:36.480 [2024-04-27 00:38:09.970806] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.480 [2024-04-27 00:38:09.973946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.480 [2024-04-27 00:38:09.974030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:36.480 pt1 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.480 00:38:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:36.739 malloc2 00:18:36.739 00:38:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:36.998 [2024-04-27 00:38:10.425617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.999 [2024-04-27 00:38:10.425716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.999 [2024-04-27 00:38:10.425762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:36.999 [2024-04-27 00:38:10.425816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.999 [2024-04-27 00:38:10.428262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.999 [2024-04-27 00:38:10.428326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.999 pt2 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.999 00:38:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:37.257 malloc3 00:18:37.257 00:38:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:37.516 [2024-04-27 00:38:10.886560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:37.516 [2024-04-27 00:38:10.886678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.516 [2024-04-27 00:38:10.886743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:37.516 [2024-04-27 00:38:10.886789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.516 [2024-04-27 00:38:10.889086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.516 [2024-04-27 00:38:10.889156] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:37.516 pt3 00:18:37.516 00:38:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:37.516 00:38:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:37.516 00:38:10 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:37.516 [2024-04-27 00:38:11.086613] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.516 [2024-04-27 00:38:11.088692] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.516 [2024-04-27 00:38:11.088783] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:37.516 [2024-04-27 00:38:11.089031] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:18:37.516 [2024-04-27 00:38:11.089059] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:37.516 [2024-04-27 00:38:11.089194] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:37.516 [2024-04-27 00:38:11.089574] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:18:37.516 [2024-04-27 00:38:11.089613] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:18:37.516 [2024-04-27 00:38:11.089797] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.516 00:38:11 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:37.516 00:38:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:37.516 00:38:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:37.516 00:38:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:37.516 00:38:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:37.516 00:38:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:37.775 00:38:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.775 00:38:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.775 00:38:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.775 00:38:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.775 00:38:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.775 00:38:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.775 00:38:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.775 "name": "raid_bdev1", 00:18:37.775 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:37.775 "strip_size_kb": 0, 00:18:37.775 "state": "online", 00:18:37.775 "raid_level": "raid1", 00:18:37.775 "superblock": true, 00:18:37.775 "num_base_bdevs": 3, 00:18:37.775 "num_base_bdevs_discovered": 3, 00:18:37.775 "num_base_bdevs_operational": 3, 00:18:37.775 "base_bdevs_list": [ 00:18:37.775 { 00:18:37.775 "name": "pt1", 00:18:37.775 "uuid": "e32a17b2-f751-5766-af80-389313aea5e8", 00:18:37.775 "is_configured": true, 00:18:37.775 "data_offset": 2048, 00:18:37.775 "data_size": 63488 00:18:37.775 }, 00:18:37.775 { 00:18:37.775 "name": "pt2", 00:18:37.775 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:37.775 "is_configured": true, 00:18:37.775 "data_offset": 2048, 00:18:37.775 "data_size": 63488 00:18:37.775 }, 00:18:37.775 { 00:18:37.775 "name": "pt3", 00:18:37.775 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:37.775 "is_configured": true, 00:18:37.775 "data_offset": 2048, 00:18:37.775 "data_size": 63488 00:18:37.775 } 00:18:37.775 ] 00:18:37.775 }' 00:18:37.775 00:38:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.775 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:18:38.710 00:38:11 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:38.710 00:38:11 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:38.710 [2024-04-27 00:38:12.191181] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.710 00:38:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=98cde73b-cd02-4a5e-b204-9ca27972ea11 00:18:38.710 00:38:12 -- bdev/bdev_raid.sh@380 -- # '[' -z 98cde73b-cd02-4a5e-b204-9ca27972ea11 ']' 00:18:38.710 00:38:12 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.969 [2024-04-27 00:38:12.450974] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.969 [2024-04-27 00:38:12.451008] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.969 [2024-04-27 00:38:12.451108] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.969 [2024-04-27 00:38:12.451206] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.969 [2024-04-27 00:38:12.451220] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:18:38.969 00:38:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.969 00:38:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:39.227 00:38:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:39.227 00:38:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:39.227 00:38:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.227 00:38:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:39.503 00:38:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.503 00:38:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:39.760 00:38:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.760 00:38:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:40.017 00:38:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:40.017 00:38:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:40.275 00:38:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:40.275 00:38:13 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:40.275 00:38:13 -- common/autotest_common.sh@638 -- # local es=0 00:18:40.275 00:38:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:40.275 00:38:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.275 00:38:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:40.275 00:38:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.275 00:38:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:40.275 00:38:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.275 00:38:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:40.275 00:38:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.275 00:38:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:40.275 00:38:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:40.275 [2024-04-27 00:38:13.819268] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:40.275 [2024-04-27 00:38:13.821331] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:40.275 [2024-04-27 00:38:13.821426] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:40.275 [2024-04-27 00:38:13.821488] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:40.275 [2024-04-27 00:38:13.821640] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:40.275 [2024-04-27 00:38:13.821685] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:40.275 [2024-04-27 00:38:13.821738] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.275 [2024-04-27 00:38:13.821762] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:18:40.275 request: 00:18:40.275 { 00:18:40.275 "name": "raid_bdev1", 00:18:40.275 "raid_level": "raid1", 00:18:40.275 "base_bdevs": [ 00:18:40.275 "malloc1", 00:18:40.275 "malloc2", 00:18:40.275 "malloc3" 00:18:40.275 ], 00:18:40.275 "superblock": false, 00:18:40.275 "method": "bdev_raid_create", 00:18:40.275 "req_id": 1 00:18:40.275 } 00:18:40.275 Got JSON-RPC error response 00:18:40.275 response: 00:18:40.275 { 00:18:40.275 "code": -17, 00:18:40.275 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:40.275 } 00:18:40.275 00:38:13 -- common/autotest_common.sh@641 -- # es=1 00:18:40.275 00:38:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:40.275 00:38:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:40.275 00:38:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:40.275 00:38:13 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.275 00:38:13 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:40.533 00:38:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:40.533 00:38:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:40.533 00:38:14 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.791 [2024-04-27 00:38:14.303369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.791 [2024-04-27 00:38:14.303498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.791 [2024-04-27 00:38:14.303543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:40.791 [2024-04-27 00:38:14.303582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.791 [2024-04-27 00:38:14.306031] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.791 [2024-04-27 00:38:14.306098] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.791 [2024-04-27 00:38:14.306242] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:40.791 [2024-04-27 00:38:14.306354] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.791 pt1 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:40.791 
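Note: the step above re-creates pt1 on top of a malloc bdev whose raid superblock survived the earlier teardown, so examine re-registers raid_bdev1 immediately, but only in "configuring" state until the missing base bdevs appear. A sketch of that resurrection path under the same assumptions as the trace (socket, UUID, and bdev names as logged):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Re-attach one base bdev; examine finds the on-disk superblock on pt1:
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_passthru_create \
        -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # The raid bdev reappears with 1 of 3 base bdevs discovered:
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .state'   # -> configuring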
00:38:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.791 00:38:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.050 00:38:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.050 "name": "raid_bdev1", 00:18:41.050 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:41.050 "strip_size_kb": 0, 00:18:41.050 "state": "configuring", 00:18:41.050 "raid_level": "raid1", 00:18:41.050 "superblock": true, 00:18:41.050 "num_base_bdevs": 3, 00:18:41.050 "num_base_bdevs_discovered": 1, 00:18:41.050 "num_base_bdevs_operational": 3, 00:18:41.050 "base_bdevs_list": [ 00:18:41.050 { 00:18:41.050 "name": "pt1", 00:18:41.050 "uuid": "e32a17b2-f751-5766-af80-389313aea5e8", 00:18:41.050 "is_configured": true, 00:18:41.050 "data_offset": 2048, 00:18:41.050 "data_size": 63488 00:18:41.050 }, 00:18:41.050 { 00:18:41.050 "name": null, 00:18:41.050 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:41.050 "is_configured": false, 00:18:41.050 "data_offset": 2048, 00:18:41.050 "data_size": 63488 00:18:41.050 }, 00:18:41.050 { 00:18:41.050 "name": null, 00:18:41.050 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:41.050 "is_configured": false, 00:18:41.050 "data_offset": 2048, 00:18:41.050 "data_size": 63488 00:18:41.050 } 00:18:41.050 ] 00:18:41.050 }' 00:18:41.050 00:38:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.050 00:38:14 -- common/autotest_common.sh@10 -- # set +x 00:18:41.617 00:38:15 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:41.617 00:38:15 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.875 [2024-04-27 00:38:15.359711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.875 [2024-04-27 00:38:15.359845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.875 [2024-04-27 00:38:15.359899] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:41.875 [2024-04-27 00:38:15.359926] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.875 [2024-04-27 00:38:15.360513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.875 [2024-04-27 00:38:15.360559] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.875 [2024-04-27 00:38:15.360689] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:41.875 [2024-04-27 00:38:15.360717] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.875 pt2 00:18:41.875 00:38:15 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:42.134 [2024-04-27 00:38:15.619849] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.134 00:38:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.392 00:38:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:42.392 "name": "raid_bdev1", 00:18:42.392 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:42.392 "strip_size_kb": 0, 00:18:42.392 "state": "configuring", 00:18:42.392 "raid_level": "raid1", 00:18:42.392 "superblock": true, 00:18:42.392 "num_base_bdevs": 3, 00:18:42.392 "num_base_bdevs_discovered": 1, 00:18:42.392 "num_base_bdevs_operational": 3, 00:18:42.392 "base_bdevs_list": [ 00:18:42.392 { 00:18:42.392 "name": "pt1", 00:18:42.392 "uuid": "e32a17b2-f751-5766-af80-389313aea5e8", 00:18:42.392 "is_configured": true, 00:18:42.392 "data_offset": 2048, 00:18:42.392 "data_size": 63488 00:18:42.392 }, 00:18:42.392 { 00:18:42.392 "name": null, 00:18:42.392 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:42.392 "is_configured": false, 00:18:42.392 "data_offset": 2048, 00:18:42.392 "data_size": 63488 00:18:42.392 }, 00:18:42.392 { 00:18:42.392 "name": null, 00:18:42.392 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:42.392 "is_configured": false, 00:18:42.392 "data_offset": 2048, 00:18:42.392 "data_size": 63488 00:18:42.392 } 00:18:42.392 ] 00:18:42.392 }' 00:18:42.392 00:38:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:42.392 00:38:15 -- common/autotest_common.sh@10 -- # set +x 00:18:43.327 00:38:16 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:43.327 00:38:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:43.327 00:38:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:43.327 [2024-04-27 00:38:16.772043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:43.327 [2024-04-27 00:38:16.772159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.327 [2024-04-27 00:38:16.772201] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:43.327 [2024-04-27 00:38:16.772230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.327 [2024-04-27 00:38:16.772695] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.327 [2024-04-27 00:38:16.772731] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:43.327 [2024-04-27 00:38:16.772845] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:43.327 [2024-04-27 00:38:16.772869] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:43.327 pt2 00:18:43.327 00:38:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:43.327 00:38:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:43.327 00:38:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:43.586 [2024-04-27 00:38:17.032114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:43.586 [2024-04-27 00:38:17.032211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.586 [2024-04-27 00:38:17.032251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:43.586 [2024-04-27 00:38:17.032280] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.586 [2024-04-27 00:38:17.032742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.586 [2024-04-27 00:38:17.032782] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:43.586 [2024-04-27 00:38:17.032904] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:43.586 [2024-04-27 00:38:17.032930] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:43.586 [2024-04-27 00:38:17.033075] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:18:43.586 [2024-04-27 00:38:17.033089] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:43.586 [2024-04-27 00:38:17.033187] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:43.587 [2024-04-27 00:38:17.033540] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:18:43.587 [2024-04-27 00:38:17.033565] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:18:43.587 [2024-04-27 00:38:17.033709] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.587 pt3 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.587 00:38:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.587 00:38:17 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.845 00:38:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.845 "name": "raid_bdev1", 00:18:43.845 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:43.845 "strip_size_kb": 0, 00:18:43.845 "state": "online", 00:18:43.846 "raid_level": "raid1", 00:18:43.846 "superblock": true, 00:18:43.846 "num_base_bdevs": 3, 00:18:43.846 "num_base_bdevs_discovered": 3, 00:18:43.846 "num_base_bdevs_operational": 3, 00:18:43.846 "base_bdevs_list": [ 00:18:43.846 { 00:18:43.846 "name": "pt1", 00:18:43.846 "uuid": "e32a17b2-f751-5766-af80-389313aea5e8", 00:18:43.846 "is_configured": true, 00:18:43.846 "data_offset": 2048, 00:18:43.846 "data_size": 63488 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "name": "pt2", 00:18:43.846 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:43.846 "is_configured": true, 00:18:43.846 "data_offset": 2048, 00:18:43.846 "data_size": 63488 00:18:43.846 }, 00:18:43.846 { 00:18:43.846 "name": "pt3", 00:18:43.846 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:43.846 "is_configured": true, 00:18:43.846 "data_offset": 2048, 00:18:43.846 "data_size": 63488 00:18:43.846 } 00:18:43.846 ] 00:18:43.846 }' 00:18:43.846 00:38:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.846 00:38:17 -- common/autotest_common.sh@10 -- # set +x 00:18:44.423 00:38:17 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:44.423 00:38:17 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:44.731 [2024-04-27 00:38:18.184623] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.731 00:38:18 -- bdev/bdev_raid.sh@430 -- # '[' 98cde73b-cd02-4a5e-b204-9ca27972ea11 '!=' 98cde73b-cd02-4a5e-b204-9ca27972ea11 ']' 00:18:44.731 00:38:18 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:44.731 00:38:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:44.731 00:38:18 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:44.732 00:38:18 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:44.990 [2024-04-27 00:38:18.444442] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.990 00:38:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.248 00:38:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.248 "name": "raid_bdev1", 00:18:45.248 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:45.248 "strip_size_kb": 0, 00:18:45.248 "state": "online", 
00:18:45.248 "raid_level": "raid1", 00:18:45.248 "superblock": true, 00:18:45.248 "num_base_bdevs": 3, 00:18:45.248 "num_base_bdevs_discovered": 2, 00:18:45.248 "num_base_bdevs_operational": 2, 00:18:45.248 "base_bdevs_list": [ 00:18:45.248 { 00:18:45.248 "name": null, 00:18:45.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.248 "is_configured": false, 00:18:45.248 "data_offset": 2048, 00:18:45.248 "data_size": 63488 00:18:45.248 }, 00:18:45.248 { 00:18:45.248 "name": "pt2", 00:18:45.248 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:45.248 "is_configured": true, 00:18:45.248 "data_offset": 2048, 00:18:45.248 "data_size": 63488 00:18:45.248 }, 00:18:45.248 { 00:18:45.248 "name": "pt3", 00:18:45.248 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:45.248 "is_configured": true, 00:18:45.248 "data_offset": 2048, 00:18:45.248 "data_size": 63488 00:18:45.248 } 00:18:45.248 ] 00:18:45.248 }' 00:18:45.248 00:38:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.248 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:18:45.843 00:38:19 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:46.101 [2024-04-27 00:38:19.580658] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.101 [2024-04-27 00:38:19.580693] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.101 [2024-04-27 00:38:19.580773] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.101 [2024-04-27 00:38:19.580857] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.101 [2024-04-27 00:38:19.580869] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:18:46.101 00:38:19 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.101 00:38:19 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:46.359 00:38:19 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:46.359 00:38:19 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:46.359 00:38:19 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:46.359 00:38:19 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:46.359 00:38:19 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:46.617 00:38:20 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:46.617 00:38:20 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:46.617 00:38:20 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:46.876 00:38:20 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:46.876 00:38:20 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:46.876 00:38:20 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:46.876 00:38:20 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:46.876 00:38:20 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:47.134 [2024-04-27 00:38:20.540838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:47.134 [2024-04-27 00:38:20.540952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.134 [2024-04-27 
00:38:20.540995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:47.134 [2024-04-27 00:38:20.541022] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.134 [2024-04-27 00:38:20.543654] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.134 [2024-04-27 00:38:20.543723] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:47.134 [2024-04-27 00:38:20.543932] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:47.134 [2024-04-27 00:38:20.544036] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:47.134 pt2 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.134 00:38:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.391 00:38:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.392 "name": "raid_bdev1", 00:18:47.392 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:47.392 "strip_size_kb": 0, 00:18:47.392 "state": "configuring", 00:18:47.392 "raid_level": "raid1", 00:18:47.392 "superblock": true, 00:18:47.392 "num_base_bdevs": 3, 00:18:47.392 "num_base_bdevs_discovered": 1, 00:18:47.392 "num_base_bdevs_operational": 2, 00:18:47.392 "base_bdevs_list": [ 00:18:47.392 { 00:18:47.392 "name": null, 00:18:47.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.392 "is_configured": false, 00:18:47.392 "data_offset": 2048, 00:18:47.392 "data_size": 63488 00:18:47.392 }, 00:18:47.392 { 00:18:47.392 "name": "pt2", 00:18:47.392 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:47.392 "is_configured": true, 00:18:47.392 "data_offset": 2048, 00:18:47.392 "data_size": 63488 00:18:47.392 }, 00:18:47.392 { 00:18:47.392 "name": null, 00:18:47.392 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:47.392 "is_configured": false, 00:18:47.392 "data_offset": 2048, 00:18:47.392 "data_size": 63488 00:18:47.392 } 00:18:47.392 ] 00:18:47.392 }' 00:18:47.392 00:38:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.392 00:38:20 -- common/autotest_common.sh@10 -- # set +x 00:18:47.956 00:38:21 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:47.956 00:38:21 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:47.956 00:38:21 -- bdev/bdev_raid.sh@462 -- # i=2 00:18:47.956 00:38:21 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:48.214 [2024-04-27 00:38:21.765127] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:48.214 [2024-04-27 00:38:21.765251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.214 [2024-04-27 00:38:21.765299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:48.214 [2024-04-27 00:38:21.765328] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.214 [2024-04-27 00:38:21.765844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.214 [2024-04-27 00:38:21.765874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:48.214 [2024-04-27 00:38:21.765997] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:48.214 [2024-04-27 00:38:21.766023] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:48.214 [2024-04-27 00:38:21.766156] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:18:48.214 [2024-04-27 00:38:21.766169] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:48.214 [2024-04-27 00:38:21.766267] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:48.214 [2024-04-27 00:38:21.766629] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:18:48.214 [2024-04-27 00:38:21.766644] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:18:48.214 [2024-04-27 00:38:21.766776] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.214 pt3 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.214 00:38:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.472 00:38:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.472 "name": "raid_bdev1", 00:18:48.472 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:48.472 "strip_size_kb": 0, 00:18:48.472 "state": "online", 00:18:48.472 "raid_level": "raid1", 00:18:48.472 "superblock": true, 00:18:48.472 "num_base_bdevs": 3, 00:18:48.472 "num_base_bdevs_discovered": 2, 00:18:48.472 "num_base_bdevs_operational": 2, 00:18:48.472 "base_bdevs_list": [ 00:18:48.472 { 00:18:48.472 "name": null, 00:18:48.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.472 "is_configured": false, 00:18:48.472 "data_offset": 2048, 00:18:48.472 "data_size": 63488 00:18:48.472 }, 00:18:48.472 { 00:18:48.472 "name": "pt2", 00:18:48.472 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:48.472 
"is_configured": true, 00:18:48.472 "data_offset": 2048, 00:18:48.472 "data_size": 63488 00:18:48.472 }, 00:18:48.472 { 00:18:48.472 "name": "pt3", 00:18:48.472 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:48.472 "is_configured": true, 00:18:48.472 "data_offset": 2048, 00:18:48.472 "data_size": 63488 00:18:48.472 } 00:18:48.472 ] 00:18:48.472 }' 00:18:48.472 00:38:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.472 00:38:22 -- common/autotest_common.sh@10 -- # set +x 00:18:49.405 00:38:22 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:18:49.405 00:38:22 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:49.405 [2024-04-27 00:38:22.945383] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.405 [2024-04-27 00:38:22.945419] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.405 [2024-04-27 00:38:22.945511] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.405 [2024-04-27 00:38:22.945577] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.405 [2024-04-27 00:38:22.945588] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:18:49.405 00:38:22 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.405 00:38:22 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:49.662 00:38:23 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:49.662 00:38:23 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:49.662 00:38:23 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:49.920 [2024-04-27 00:38:23.409490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:49.920 [2024-04-27 00:38:23.409609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.920 [2024-04-27 00:38:23.409652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:49.920 [2024-04-27 00:38:23.409682] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.920 [2024-04-27 00:38:23.412167] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.920 [2024-04-27 00:38:23.412231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:49.920 [2024-04-27 00:38:23.412378] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:49.920 [2024-04-27 00:38:23.412425] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:49.920 pt1 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.920 00:38:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.177 00:38:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.177 "name": "raid_bdev1", 00:18:50.177 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:50.177 "strip_size_kb": 0, 00:18:50.177 "state": "configuring", 00:18:50.177 "raid_level": "raid1", 00:18:50.177 "superblock": true, 00:18:50.177 "num_base_bdevs": 3, 00:18:50.177 "num_base_bdevs_discovered": 1, 00:18:50.177 "num_base_bdevs_operational": 3, 00:18:50.177 "base_bdevs_list": [ 00:18:50.177 { 00:18:50.177 "name": "pt1", 00:18:50.177 "uuid": "e32a17b2-f751-5766-af80-389313aea5e8", 00:18:50.177 "is_configured": true, 00:18:50.177 "data_offset": 2048, 00:18:50.177 "data_size": 63488 00:18:50.177 }, 00:18:50.177 { 00:18:50.177 "name": null, 00:18:50.177 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:50.177 "is_configured": false, 00:18:50.177 "data_offset": 2048, 00:18:50.177 "data_size": 63488 00:18:50.177 }, 00:18:50.177 { 00:18:50.177 "name": null, 00:18:50.177 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:50.177 "is_configured": false, 00:18:50.177 "data_offset": 2048, 00:18:50.177 "data_size": 63488 00:18:50.177 } 00:18:50.177 ] 00:18:50.177 }' 00:18:50.177 00:38:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.177 00:38:23 -- common/autotest_common.sh@10 -- # set +x 00:18:50.742 00:38:24 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:50.742 00:38:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:50.742 00:38:24 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:51.012 00:38:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:51.012 00:38:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:51.012 00:38:24 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:51.274 00:38:24 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:51.274 00:38:24 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:51.274 00:38:24 -- bdev/bdev_raid.sh@489 -- # i=2 00:18:51.274 00:38:24 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:51.531 [2024-04-27 00:38:24.929838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:51.532 [2024-04-27 00:38:24.929969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.532 [2024-04-27 00:38:24.930008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:51.532 [2024-04-27 00:38:24.930036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.532 [2024-04-27 00:38:24.930619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.532 [2024-04-27 00:38:24.930667] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:51.532 [2024-04-27 00:38:24.930832] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:51.532 
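Note: a superblock's seq_number is bumped as the array's membership changes, so pt3, which stayed in the array through the earlier reconfigurations, carries a newer superblock (4) than the raid bdev just re-assembled from pt1's older copy (2); the DEBUG line that follows shows exactly that comparison, after which the stale assembly is dropped and rebuilt around pt3. A small inspection sketch under the trace's assumptions (jq field names follow the JSON dumps earlier in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Summarize what examine left behind after the seq_number comparison:
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq '.[] | {name, state, num_base_bdevs_discovered}'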
[2024-04-27 00:38:24.930847] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:51.532 [2024-04-27 00:38:24.930855] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:51.532 [2024-04-27 00:38:24.930928] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:18:51.532 [2024-04-27 00:38:24.931011] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:51.532 pt3 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.532 00:38:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.789 00:38:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.789 "name": "raid_bdev1", 00:18:51.789 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:51.789 "strip_size_kb": 0, 00:18:51.789 "state": "configuring", 00:18:51.789 "raid_level": "raid1", 00:18:51.789 "superblock": true, 00:18:51.789 "num_base_bdevs": 3, 00:18:51.789 "num_base_bdevs_discovered": 1, 00:18:51.789 "num_base_bdevs_operational": 2, 00:18:51.789 "base_bdevs_list": [ 00:18:51.789 { 00:18:51.789 "name": null, 00:18:51.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.789 "is_configured": false, 00:18:51.789 "data_offset": 2048, 00:18:51.789 "data_size": 63488 00:18:51.789 }, 00:18:51.789 { 00:18:51.789 "name": null, 00:18:51.789 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:51.789 "is_configured": false, 00:18:51.789 "data_offset": 2048, 00:18:51.789 "data_size": 63488 00:18:51.789 }, 00:18:51.789 { 00:18:51.789 "name": "pt3", 00:18:51.789 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:51.789 "is_configured": true, 00:18:51.789 "data_offset": 2048, 00:18:51.789 "data_size": 63488 00:18:51.789 } 00:18:51.789 ] 00:18:51.789 }' 00:18:51.789 00:38:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.789 00:38:25 -- common/autotest_common.sh@10 -- # set +x 00:18:52.356 00:38:25 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:52.356 00:38:25 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:52.356 00:38:25 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:52.615 [2024-04-27 00:38:26.022074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:52.615 [2024-04-27 00:38:26.022199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.615 [2024-04-27 00:38:26.022237] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:52.615 [2024-04-27 00:38:26.022267] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.615 [2024-04-27 00:38:26.022940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.615 [2024-04-27 00:38:26.022991] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:52.615 [2024-04-27 00:38:26.023109] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:52.615 [2024-04-27 00:38:26.023138] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:52.615 [2024-04-27 00:38:26.023288] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:18:52.615 [2024-04-27 00:38:26.023302] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:52.615 [2024-04-27 00:38:26.023421] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:52.615 [2024-04-27 00:38:26.023811] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:18:52.615 [2024-04-27 00:38:26.023827] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:18:52.615 [2024-04-27 00:38:26.023970] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.615 pt2 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.615 00:38:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.874 00:38:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.874 "name": "raid_bdev1", 00:18:52.874 "uuid": "98cde73b-cd02-4a5e-b204-9ca27972ea11", 00:18:52.874 "strip_size_kb": 0, 00:18:52.874 "state": "online", 00:18:52.874 "raid_level": "raid1", 00:18:52.874 "superblock": true, 00:18:52.874 "num_base_bdevs": 3, 00:18:52.874 "num_base_bdevs_discovered": 2, 00:18:52.874 "num_base_bdevs_operational": 2, 00:18:52.874 "base_bdevs_list": [ 00:18:52.874 { 00:18:52.874 "name": null, 00:18:52.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.874 "is_configured": false, 00:18:52.874 "data_offset": 2048, 00:18:52.874 "data_size": 63488 00:18:52.874 }, 00:18:52.874 { 00:18:52.874 "name": "pt2", 00:18:52.874 "uuid": "855af7de-94ea-57fe-b9ae-fb5994a4c5cc", 00:18:52.874 "is_configured": true, 00:18:52.874 "data_offset": 2048, 00:18:52.874 "data_size": 63488 00:18:52.874 
}, 00:18:52.874 { 00:18:52.874 "name": "pt3", 00:18:52.874 "uuid": "1e52fb6b-7ff1-54be-96e4-ec62501828f2", 00:18:52.874 "is_configured": true, 00:18:52.874 "data_offset": 2048, 00:18:52.874 "data_size": 63488 00:18:52.874 } 00:18:52.874 ] 00:18:52.874 }' 00:18:52.874 00:38:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.874 00:38:26 -- common/autotest_common.sh@10 -- # set +x 00:18:53.441 00:38:26 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:53.441 00:38:26 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:53.698 [2024-04-27 00:38:27.194589] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:53.698 00:38:27 -- bdev/bdev_raid.sh@506 -- # '[' 98cde73b-cd02-4a5e-b204-9ca27972ea11 '!=' 98cde73b-cd02-4a5e-b204-9ca27972ea11 ']' 00:18:53.698 00:38:27 -- bdev/bdev_raid.sh@511 -- # killprocess 125386 00:18:53.698 00:38:27 -- common/autotest_common.sh@936 -- # '[' -z 125386 ']' 00:18:53.698 00:38:27 -- common/autotest_common.sh@940 -- # kill -0 125386 00:18:53.698 00:38:27 -- common/autotest_common.sh@941 -- # uname 00:18:53.698 00:38:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:53.698 00:38:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 125386 00:18:53.698 00:38:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:53.698 00:38:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:53.698 00:38:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 125386' 00:18:53.698 killing process with pid 125386 00:18:53.698 00:38:27 -- common/autotest_common.sh@955 -- # kill 125386 00:18:53.698 [2024-04-27 00:38:27.236698] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:53.698 [2024-04-27 00:38:27.236770] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.698 [2024-04-27 00:38:27.236832] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.698 [2024-04-27 00:38:27.236842] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:18:53.699 00:38:27 -- common/autotest_common.sh@960 -- # wait 125386 00:18:53.956 [2024-04-27 00:38:27.451512] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:54.890 ************************************ 00:18:54.890 END TEST raid_superblock_test 00:18:54.890 ************************************ 00:18:54.890 00:38:28 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:54.890 00:18:54.890 real 0m19.892s 00:18:54.890 user 0m36.608s 00:18:54.890 sys 0m2.256s 00:18:54.890 00:38:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:54.890 00:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:54.890 00:38:28 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:54.890 00:38:28 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:54.890 00:38:28 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:54.890 00:38:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:54.890 00:38:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:54.890 00:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:55.147 ************************************ 00:18:55.147 START TEST raid_state_function_test 00:18:55.147 ************************************ 00:18:55.147 00:38:28 -- 
common/autotest_common.sh@1111 -- # raid_state_function_test raid0 4 false 00:18:55.147 00:38:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:55.147 00:38:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:55.147 00:38:28 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:55.147 00:38:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:55.147 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:55.147 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.147 00:38:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=126008 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126008' 00:18:55.148 Process raid pid: 126008 00:18:55.148 00:38:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126008 /var/tmp/spdk-raid.sock 00:18:55.148 00:38:28 -- common/autotest_common.sh@817 -- # '[' -z 126008 ']' 00:18:55.148 00:38:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:55.148 00:38:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:55.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:55.148 00:38:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:55.148 00:38:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:55.148 00:38:28 -- common/autotest_common.sh@10 -- # set +x 00:18:55.148 [2024-04-27 00:38:28.586718] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
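A minimal sketch (assuming nothing beyond what the trace itself shows) of the launch-and-wait pattern above: the test starts the bdev_svc stub application on a private RPC socket, then blocks until that socket accepts connections before issuing any RPCs. The binary path, socket path, and flags are copied verbatim from the trace; waitforlisten is the autotest_common.sh helper invoked above.

# Start the stub app with raid-module debug logging (-L bdev_raid) on a dedicated RPC socket.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# Poll until the app is up and listening on the UNIX-domain socket, then proceed.
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock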
00:18:55.148 [2024-04-27 00:38:28.586879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.406 [2024-04-27 00:38:28.740819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.406 [2024-04-27 00:38:28.925446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.664 [2024-04-27 00:38:29.098443] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.229 00:38:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:56.229 00:38:29 -- common/autotest_common.sh@850 -- # return 0 00:18:56.229 00:38:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:56.229 [2024-04-27 00:38:29.807504] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:56.229 [2024-04-27 00:38:29.807829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:56.229 [2024-04-27 00:38:29.807989] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.229 [2024-04-27 00:38:29.808086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.229 [2024-04-27 00:38:29.808294] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:56.229 [2024-04-27 00:38:29.808416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:56.229 [2024-04-27 00:38:29.808573] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:56.229 [2024-04-27 00:38:29.808679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.487 00:38:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.745 00:38:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.745 "name": "Existed_Raid", 00:18:56.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.745 "strip_size_kb": 64, 00:18:56.745 "state": "configuring", 00:18:56.745 "raid_level": "raid0", 00:18:56.745 "superblock": false, 00:18:56.745 "num_base_bdevs": 4, 00:18:56.745 "num_base_bdevs_discovered": 0, 00:18:56.745 "num_base_bdevs_operational": 4, 00:18:56.745 "base_bdevs_list": [ 00:18:56.745 { 00:18:56.745 
"name": "BaseBdev1", 00:18:56.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.745 "is_configured": false, 00:18:56.745 "data_offset": 0, 00:18:56.745 "data_size": 0 00:18:56.745 }, 00:18:56.745 { 00:18:56.745 "name": "BaseBdev2", 00:18:56.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.745 "is_configured": false, 00:18:56.745 "data_offset": 0, 00:18:56.745 "data_size": 0 00:18:56.745 }, 00:18:56.745 { 00:18:56.745 "name": "BaseBdev3", 00:18:56.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.745 "is_configured": false, 00:18:56.746 "data_offset": 0, 00:18:56.746 "data_size": 0 00:18:56.746 }, 00:18:56.746 { 00:18:56.746 "name": "BaseBdev4", 00:18:56.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.746 "is_configured": false, 00:18:56.746 "data_offset": 0, 00:18:56.746 "data_size": 0 00:18:56.746 } 00:18:56.746 ] 00:18:56.746 }' 00:18:56.746 00:38:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.746 00:38:30 -- common/autotest_common.sh@10 -- # set +x 00:18:57.312 00:38:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:57.575 [2024-04-27 00:38:30.963706] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:57.575 [2024-04-27 00:38:30.963976] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:18:57.575 00:38:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:57.833 [2024-04-27 00:38:31.171763] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:57.833 [2024-04-27 00:38:31.172002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:57.833 [2024-04-27 00:38:31.172103] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.833 [2024-04-27 00:38:31.172167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.833 [2024-04-27 00:38:31.172280] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:57.833 [2024-04-27 00:38:31.172373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:57.834 [2024-04-27 00:38:31.172600] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:57.834 [2024-04-27 00:38:31.172667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:57.834 00:38:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:57.834 [2024-04-27 00:38:31.406511] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.834 BaseBdev1 00:18:58.093 00:38:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:58.093 00:38:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:58.093 00:38:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:58.093 00:38:31 -- common/autotest_common.sh@887 -- # local i 00:18:58.093 00:38:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:58.093 00:38:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:58.093 00:38:31 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.093 00:38:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:58.352 [ 00:18:58.352 { 00:18:58.352 "name": "BaseBdev1", 00:18:58.352 "aliases": [ 00:18:58.352 "3b66b0a2-8a52-4ba3-b784-430223bc9134" 00:18:58.352 ], 00:18:58.352 "product_name": "Malloc disk", 00:18:58.352 "block_size": 512, 00:18:58.352 "num_blocks": 65536, 00:18:58.352 "uuid": "3b66b0a2-8a52-4ba3-b784-430223bc9134", 00:18:58.352 "assigned_rate_limits": { 00:18:58.352 "rw_ios_per_sec": 0, 00:18:58.352 "rw_mbytes_per_sec": 0, 00:18:58.352 "r_mbytes_per_sec": 0, 00:18:58.352 "w_mbytes_per_sec": 0 00:18:58.352 }, 00:18:58.352 "claimed": true, 00:18:58.352 "claim_type": "exclusive_write", 00:18:58.352 "zoned": false, 00:18:58.352 "supported_io_types": { 00:18:58.352 "read": true, 00:18:58.352 "write": true, 00:18:58.352 "unmap": true, 00:18:58.352 "write_zeroes": true, 00:18:58.352 "flush": true, 00:18:58.352 "reset": true, 00:18:58.352 "compare": false, 00:18:58.352 "compare_and_write": false, 00:18:58.352 "abort": true, 00:18:58.352 "nvme_admin": false, 00:18:58.352 "nvme_io": false 00:18:58.352 }, 00:18:58.352 "memory_domains": [ 00:18:58.352 { 00:18:58.352 "dma_device_id": "system", 00:18:58.352 "dma_device_type": 1 00:18:58.352 }, 00:18:58.352 { 00:18:58.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.352 "dma_device_type": 2 00:18:58.352 } 00:18:58.352 ], 00:18:58.352 "driver_specific": {} 00:18:58.352 } 00:18:58.352 ] 00:18:58.352 00:38:31 -- common/autotest_common.sh@893 -- # return 0 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.352 00:38:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.611 00:38:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.611 "name": "Existed_Raid", 00:18:58.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.611 "strip_size_kb": 64, 00:18:58.611 "state": "configuring", 00:18:58.611 "raid_level": "raid0", 00:18:58.611 "superblock": false, 00:18:58.611 "num_base_bdevs": 4, 00:18:58.611 "num_base_bdevs_discovered": 1, 00:18:58.611 "num_base_bdevs_operational": 4, 00:18:58.611 "base_bdevs_list": [ 00:18:58.611 { 00:18:58.611 "name": "BaseBdev1", 00:18:58.611 "uuid": "3b66b0a2-8a52-4ba3-b784-430223bc9134", 00:18:58.611 "is_configured": true, 00:18:58.611 "data_offset": 0, 00:18:58.611 "data_size": 65536 00:18:58.611 }, 00:18:58.611 { 00:18:58.611 "name": "BaseBdev2", 00:18:58.611 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:58.611 "is_configured": false, 00:18:58.611 "data_offset": 0, 00:18:58.611 "data_size": 0 00:18:58.611 }, 00:18:58.611 { 00:18:58.611 "name": "BaseBdev3", 00:18:58.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.611 "is_configured": false, 00:18:58.611 "data_offset": 0, 00:18:58.611 "data_size": 0 00:18:58.611 }, 00:18:58.611 { 00:18:58.611 "name": "BaseBdev4", 00:18:58.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.611 "is_configured": false, 00:18:58.611 "data_offset": 0, 00:18:58.611 "data_size": 0 00:18:58.611 } 00:18:58.611 ] 00:18:58.611 }' 00:18:58.611 00:38:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.611 00:38:32 -- common/autotest_common.sh@10 -- # set +x 00:18:59.179 00:38:32 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:59.438 [2024-04-27 00:38:32.951074] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.438 [2024-04-27 00:38:32.951317] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:18:59.438 00:38:32 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:59.438 00:38:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:59.696 [2024-04-27 00:38:33.215132] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.696 [2024-04-27 00:38:33.216949] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.696 [2024-04-27 00:38:33.217174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.696 [2024-04-27 00:38:33.217288] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.696 [2024-04-27 00:38:33.217351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.696 [2024-04-27 00:38:33.217442] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:59.696 [2024-04-27 00:38:33.217511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.696 00:38:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.954 
00:38:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.954 "name": "Existed_Raid", 00:18:59.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.954 "strip_size_kb": 64, 00:18:59.954 "state": "configuring", 00:18:59.954 "raid_level": "raid0", 00:18:59.954 "superblock": false, 00:18:59.954 "num_base_bdevs": 4, 00:18:59.954 "num_base_bdevs_discovered": 1, 00:18:59.954 "num_base_bdevs_operational": 4, 00:18:59.954 "base_bdevs_list": [ 00:18:59.954 { 00:18:59.954 "name": "BaseBdev1", 00:18:59.954 "uuid": "3b66b0a2-8a52-4ba3-b784-430223bc9134", 00:18:59.954 "is_configured": true, 00:18:59.954 "data_offset": 0, 00:18:59.954 "data_size": 65536 00:18:59.954 }, 00:18:59.954 { 00:18:59.954 "name": "BaseBdev2", 00:18:59.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.954 "is_configured": false, 00:18:59.954 "data_offset": 0, 00:18:59.954 "data_size": 0 00:18:59.954 }, 00:18:59.954 { 00:18:59.954 "name": "BaseBdev3", 00:18:59.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.954 "is_configured": false, 00:18:59.954 "data_offset": 0, 00:18:59.954 "data_size": 0 00:18:59.954 }, 00:18:59.954 { 00:18:59.954 "name": "BaseBdev4", 00:18:59.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.954 "is_configured": false, 00:18:59.954 "data_offset": 0, 00:18:59.954 "data_size": 0 00:18:59.954 } 00:18:59.954 ] 00:18:59.954 }' 00:18:59.954 00:38:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.954 00:38:33 -- common/autotest_common.sh@10 -- # set +x 00:19:00.520 00:38:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:01.086 [2024-04-27 00:38:34.408491] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.086 BaseBdev2 00:19:01.086 00:38:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:01.086 00:38:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:01.086 00:38:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:01.086 00:38:34 -- common/autotest_common.sh@887 -- # local i 00:19:01.086 00:38:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:01.086 00:38:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:01.086 00:38:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:01.345 00:38:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:01.345 [ 00:19:01.345 { 00:19:01.345 "name": "BaseBdev2", 00:19:01.345 "aliases": [ 00:19:01.345 "e13ad34b-aa8a-4f2f-babd-db8e41a92502" 00:19:01.345 ], 00:19:01.345 "product_name": "Malloc disk", 00:19:01.345 "block_size": 512, 00:19:01.345 "num_blocks": 65536, 00:19:01.345 "uuid": "e13ad34b-aa8a-4f2f-babd-db8e41a92502", 00:19:01.345 "assigned_rate_limits": { 00:19:01.345 "rw_ios_per_sec": 0, 00:19:01.345 "rw_mbytes_per_sec": 0, 00:19:01.345 "r_mbytes_per_sec": 0, 00:19:01.345 "w_mbytes_per_sec": 0 00:19:01.345 }, 00:19:01.345 "claimed": true, 00:19:01.345 "claim_type": "exclusive_write", 00:19:01.345 "zoned": false, 00:19:01.345 "supported_io_types": { 00:19:01.345 "read": true, 00:19:01.345 "write": true, 00:19:01.345 "unmap": true, 00:19:01.345 "write_zeroes": true, 00:19:01.345 "flush": true, 00:19:01.345 "reset": true, 00:19:01.345 "compare": false, 00:19:01.345 "compare_and_write": false, 00:19:01.345 "abort": true, 00:19:01.345 
"nvme_admin": false, 00:19:01.345 "nvme_io": false 00:19:01.345 }, 00:19:01.345 "memory_domains": [ 00:19:01.345 { 00:19:01.345 "dma_device_id": "system", 00:19:01.345 "dma_device_type": 1 00:19:01.345 }, 00:19:01.345 { 00:19:01.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.345 "dma_device_type": 2 00:19:01.345 } 00:19:01.345 ], 00:19:01.345 "driver_specific": {} 00:19:01.345 } 00:19:01.345 ] 00:19:01.604 00:38:34 -- common/autotest_common.sh@893 -- # return 0 00:19:01.604 00:38:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:01.604 00:38:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:01.604 00:38:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.605 00:38:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.605 00:38:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.605 "name": "Existed_Raid", 00:19:01.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.605 "strip_size_kb": 64, 00:19:01.605 "state": "configuring", 00:19:01.605 "raid_level": "raid0", 00:19:01.605 "superblock": false, 00:19:01.605 "num_base_bdevs": 4, 00:19:01.605 "num_base_bdevs_discovered": 2, 00:19:01.605 "num_base_bdevs_operational": 4, 00:19:01.605 "base_bdevs_list": [ 00:19:01.605 { 00:19:01.605 "name": "BaseBdev1", 00:19:01.605 "uuid": "3b66b0a2-8a52-4ba3-b784-430223bc9134", 00:19:01.605 "is_configured": true, 00:19:01.605 "data_offset": 0, 00:19:01.605 "data_size": 65536 00:19:01.605 }, 00:19:01.605 { 00:19:01.605 "name": "BaseBdev2", 00:19:01.605 "uuid": "e13ad34b-aa8a-4f2f-babd-db8e41a92502", 00:19:01.605 "is_configured": true, 00:19:01.605 "data_offset": 0, 00:19:01.605 "data_size": 65536 00:19:01.605 }, 00:19:01.605 { 00:19:01.605 "name": "BaseBdev3", 00:19:01.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.605 "is_configured": false, 00:19:01.605 "data_offset": 0, 00:19:01.605 "data_size": 0 00:19:01.605 }, 00:19:01.605 { 00:19:01.605 "name": "BaseBdev4", 00:19:01.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.605 "is_configured": false, 00:19:01.605 "data_offset": 0, 00:19:01.605 "data_size": 0 00:19:01.605 } 00:19:01.605 ] 00:19:01.605 }' 00:19:01.605 00:38:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.605 00:38:35 -- common/autotest_common.sh@10 -- # set +x 00:19:02.541 00:38:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:02.541 [2024-04-27 00:38:36.027139] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:02.541 BaseBdev3 00:19:02.541 00:38:36 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev3 00:19:02.541 00:38:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:02.541 00:38:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:02.541 00:38:36 -- common/autotest_common.sh@887 -- # local i 00:19:02.541 00:38:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:02.541 00:38:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:02.541 00:38:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.800 00:38:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:03.059 [ 00:19:03.059 { 00:19:03.059 "name": "BaseBdev3", 00:19:03.059 "aliases": [ 00:19:03.059 "0f71a54f-4a44-44aa-a39a-abae0881b344" 00:19:03.059 ], 00:19:03.059 "product_name": "Malloc disk", 00:19:03.059 "block_size": 512, 00:19:03.059 "num_blocks": 65536, 00:19:03.059 "uuid": "0f71a54f-4a44-44aa-a39a-abae0881b344", 00:19:03.059 "assigned_rate_limits": { 00:19:03.059 "rw_ios_per_sec": 0, 00:19:03.059 "rw_mbytes_per_sec": 0, 00:19:03.059 "r_mbytes_per_sec": 0, 00:19:03.059 "w_mbytes_per_sec": 0 00:19:03.059 }, 00:19:03.059 "claimed": true, 00:19:03.059 "claim_type": "exclusive_write", 00:19:03.059 "zoned": false, 00:19:03.059 "supported_io_types": { 00:19:03.059 "read": true, 00:19:03.059 "write": true, 00:19:03.059 "unmap": true, 00:19:03.059 "write_zeroes": true, 00:19:03.059 "flush": true, 00:19:03.059 "reset": true, 00:19:03.059 "compare": false, 00:19:03.059 "compare_and_write": false, 00:19:03.059 "abort": true, 00:19:03.059 "nvme_admin": false, 00:19:03.059 "nvme_io": false 00:19:03.059 }, 00:19:03.059 "memory_domains": [ 00:19:03.059 { 00:19:03.059 "dma_device_id": "system", 00:19:03.059 "dma_device_type": 1 00:19:03.059 }, 00:19:03.059 { 00:19:03.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.059 "dma_device_type": 2 00:19:03.059 } 00:19:03.059 ], 00:19:03.059 "driver_specific": {} 00:19:03.059 } 00:19:03.059 ] 00:19:03.059 00:38:36 -- common/autotest_common.sh@893 -- # return 0 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.059 00:38:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.319 00:38:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.319 "name": "Existed_Raid", 00:19:03.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.319 "strip_size_kb": 64, 
00:19:03.319 "state": "configuring", 00:19:03.319 "raid_level": "raid0", 00:19:03.319 "superblock": false, 00:19:03.319 "num_base_bdevs": 4, 00:19:03.319 "num_base_bdevs_discovered": 3, 00:19:03.319 "num_base_bdevs_operational": 4, 00:19:03.319 "base_bdevs_list": [ 00:19:03.319 { 00:19:03.319 "name": "BaseBdev1", 00:19:03.319 "uuid": "3b66b0a2-8a52-4ba3-b784-430223bc9134", 00:19:03.319 "is_configured": true, 00:19:03.319 "data_offset": 0, 00:19:03.319 "data_size": 65536 00:19:03.319 }, 00:19:03.319 { 00:19:03.319 "name": "BaseBdev2", 00:19:03.319 "uuid": "e13ad34b-aa8a-4f2f-babd-db8e41a92502", 00:19:03.319 "is_configured": true, 00:19:03.319 "data_offset": 0, 00:19:03.319 "data_size": 65536 00:19:03.319 }, 00:19:03.319 { 00:19:03.319 "name": "BaseBdev3", 00:19:03.319 "uuid": "0f71a54f-4a44-44aa-a39a-abae0881b344", 00:19:03.319 "is_configured": true, 00:19:03.319 "data_offset": 0, 00:19:03.319 "data_size": 65536 00:19:03.319 }, 00:19:03.319 { 00:19:03.319 "name": "BaseBdev4", 00:19:03.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.319 "is_configured": false, 00:19:03.319 "data_offset": 0, 00:19:03.319 "data_size": 0 00:19:03.319 } 00:19:03.319 ] 00:19:03.319 }' 00:19:03.319 00:38:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.319 00:38:36 -- common/autotest_common.sh@10 -- # set +x 00:19:03.923 00:38:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:04.182 [2024-04-27 00:38:37.711433] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:04.182 [2024-04-27 00:38:37.711496] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:04.182 [2024-04-27 00:38:37.711506] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:04.182 [2024-04-27 00:38:37.711625] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:04.182 [2024-04-27 00:38:37.712030] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:04.182 [2024-04-27 00:38:37.712056] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:19:04.182 [2024-04-27 00:38:37.712330] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.182 BaseBdev4 00:19:04.182 00:38:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:04.182 00:38:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:19:04.182 00:38:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:04.182 00:38:37 -- common/autotest_common.sh@887 -- # local i 00:19:04.182 00:38:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:04.182 00:38:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:04.182 00:38:37 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.441 00:38:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:04.699 [ 00:19:04.699 { 00:19:04.699 "name": "BaseBdev4", 00:19:04.699 "aliases": [ 00:19:04.699 "fcb25b0d-c795-4cb8-a535-a4ddd84a18d3" 00:19:04.699 ], 00:19:04.699 "product_name": "Malloc disk", 00:19:04.699 "block_size": 512, 00:19:04.700 "num_blocks": 65536, 00:19:04.700 "uuid": "fcb25b0d-c795-4cb8-a535-a4ddd84a18d3", 00:19:04.700 
"assigned_rate_limits": { 00:19:04.700 "rw_ios_per_sec": 0, 00:19:04.700 "rw_mbytes_per_sec": 0, 00:19:04.700 "r_mbytes_per_sec": 0, 00:19:04.700 "w_mbytes_per_sec": 0 00:19:04.700 }, 00:19:04.700 "claimed": true, 00:19:04.700 "claim_type": "exclusive_write", 00:19:04.700 "zoned": false, 00:19:04.700 "supported_io_types": { 00:19:04.700 "read": true, 00:19:04.700 "write": true, 00:19:04.700 "unmap": true, 00:19:04.700 "write_zeroes": true, 00:19:04.700 "flush": true, 00:19:04.700 "reset": true, 00:19:04.700 "compare": false, 00:19:04.700 "compare_and_write": false, 00:19:04.700 "abort": true, 00:19:04.700 "nvme_admin": false, 00:19:04.700 "nvme_io": false 00:19:04.700 }, 00:19:04.700 "memory_domains": [ 00:19:04.700 { 00:19:04.700 "dma_device_id": "system", 00:19:04.700 "dma_device_type": 1 00:19:04.700 }, 00:19:04.700 { 00:19:04.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.700 "dma_device_type": 2 00:19:04.700 } 00:19:04.700 ], 00:19:04.700 "driver_specific": {} 00:19:04.700 } 00:19:04.700 ] 00:19:04.700 00:38:38 -- common/autotest_common.sh@893 -- # return 0 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.700 00:38:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.958 00:38:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:04.958 "name": "Existed_Raid", 00:19:04.958 "uuid": "f0677a65-f78e-4f0d-bb98-28e0993f0dc0", 00:19:04.958 "strip_size_kb": 64, 00:19:04.958 "state": "online", 00:19:04.958 "raid_level": "raid0", 00:19:04.958 "superblock": false, 00:19:04.958 "num_base_bdevs": 4, 00:19:04.958 "num_base_bdevs_discovered": 4, 00:19:04.958 "num_base_bdevs_operational": 4, 00:19:04.958 "base_bdevs_list": [ 00:19:04.958 { 00:19:04.958 "name": "BaseBdev1", 00:19:04.958 "uuid": "3b66b0a2-8a52-4ba3-b784-430223bc9134", 00:19:04.958 "is_configured": true, 00:19:04.958 "data_offset": 0, 00:19:04.958 "data_size": 65536 00:19:04.958 }, 00:19:04.958 { 00:19:04.958 "name": "BaseBdev2", 00:19:04.958 "uuid": "e13ad34b-aa8a-4f2f-babd-db8e41a92502", 00:19:04.958 "is_configured": true, 00:19:04.958 "data_offset": 0, 00:19:04.958 "data_size": 65536 00:19:04.958 }, 00:19:04.958 { 00:19:04.958 "name": "BaseBdev3", 00:19:04.958 "uuid": "0f71a54f-4a44-44aa-a39a-abae0881b344", 00:19:04.958 "is_configured": true, 00:19:04.958 "data_offset": 0, 00:19:04.958 "data_size": 65536 00:19:04.958 }, 00:19:04.958 { 00:19:04.958 "name": "BaseBdev4", 00:19:04.958 "uuid": "fcb25b0d-c795-4cb8-a535-a4ddd84a18d3", 00:19:04.958 "is_configured": true, 
00:19:04.958 "data_offset": 0, 00:19:04.958 "data_size": 65536 00:19:04.958 } 00:19:04.959 ] 00:19:04.959 }' 00:19:04.959 00:38:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:04.959 00:38:38 -- common/autotest_common.sh@10 -- # set +x 00:19:05.526 00:38:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:05.785 [2024-04-27 00:38:39.359955] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:05.785 [2024-04-27 00:38:39.359989] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:05.785 [2024-04-27 00:38:39.360067] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.044 00:38:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.303 00:38:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.303 "name": "Existed_Raid", 00:19:06.303 "uuid": "f0677a65-f78e-4f0d-bb98-28e0993f0dc0", 00:19:06.303 "strip_size_kb": 64, 00:19:06.303 "state": "offline", 00:19:06.303 "raid_level": "raid0", 00:19:06.303 "superblock": false, 00:19:06.303 "num_base_bdevs": 4, 00:19:06.303 "num_base_bdevs_discovered": 3, 00:19:06.303 "num_base_bdevs_operational": 3, 00:19:06.303 "base_bdevs_list": [ 00:19:06.303 { 00:19:06.303 "name": null, 00:19:06.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.303 "is_configured": false, 00:19:06.303 "data_offset": 0, 00:19:06.303 "data_size": 65536 00:19:06.303 }, 00:19:06.303 { 00:19:06.303 "name": "BaseBdev2", 00:19:06.303 "uuid": "e13ad34b-aa8a-4f2f-babd-db8e41a92502", 00:19:06.303 "is_configured": true, 00:19:06.303 "data_offset": 0, 00:19:06.303 "data_size": 65536 00:19:06.303 }, 00:19:06.303 { 00:19:06.303 "name": "BaseBdev3", 00:19:06.303 "uuid": "0f71a54f-4a44-44aa-a39a-abae0881b344", 00:19:06.303 "is_configured": true, 00:19:06.303 "data_offset": 0, 00:19:06.303 "data_size": 65536 00:19:06.303 }, 00:19:06.303 { 00:19:06.303 "name": "BaseBdev4", 00:19:06.303 "uuid": "fcb25b0d-c795-4cb8-a535-a4ddd84a18d3", 00:19:06.303 "is_configured": true, 00:19:06.303 "data_offset": 0, 00:19:06.303 "data_size": 65536 00:19:06.303 } 00:19:06.303 ] 00:19:06.303 }' 00:19:06.303 00:38:39 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.303 00:38:39 -- common/autotest_common.sh@10 -- # set +x 00:19:06.870 00:38:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:06.870 00:38:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:06.870 00:38:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.870 00:38:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:07.128 00:38:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:07.128 00:38:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.128 00:38:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:07.387 [2024-04-27 00:38:40.749939] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:07.387 00:38:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:07.387 00:38:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:07.387 00:38:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.387 00:38:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:07.646 00:38:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:07.646 00:38:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:07.646 00:38:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:07.905 [2024-04-27 00:38:41.314561] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:07.905 00:38:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:07.905 00:38:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:07.905 00:38:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.905 00:38:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:08.164 00:38:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:08.164 00:38:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:08.164 00:38:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:08.422 [2024-04-27 00:38:41.890611] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:08.422 [2024-04-27 00:38:41.890682] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:19:08.423 00:38:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:08.423 00:38:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:08.423 00:38:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:08.423 00:38:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.681 00:38:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:08.681 00:38:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:08.681 00:38:42 -- bdev/bdev_raid.sh@287 -- # killprocess 126008 00:19:08.681 00:38:42 -- common/autotest_common.sh@936 -- # '[' -z 126008 ']' 00:19:08.681 00:38:42 -- common/autotest_common.sh@940 -- # kill -0 126008 00:19:08.681 00:38:42 -- common/autotest_common.sh@941 -- # uname 00:19:08.681 00:38:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:08.681 00:38:42 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 126008 00:19:08.681 killing process with pid 126008 00:19:08.681 00:38:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:08.681 00:38:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:08.681 00:38:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126008' 00:19:08.681 00:38:42 -- common/autotest_common.sh@955 -- # kill 126008 00:19:08.681 [2024-04-27 00:38:42.228880] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.681 00:38:42 -- common/autotest_common.sh@960 -- # wait 126008 00:19:08.681 [2024-04-27 00:38:42.228992] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.056 ************************************ 00:19:10.056 END TEST raid_state_function_test 00:19:10.056 ************************************ 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:10.056 00:19:10.056 real 0m14.688s 00:19:10.056 user 0m26.220s 00:19:10.056 sys 0m1.781s 00:19:10.056 00:38:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:10.056 00:38:43 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:19:10.056 00:38:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:10.056 00:38:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.056 00:38:43 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 ************************************ 00:19:10.056 START TEST raid_state_function_test_sb 00:19:10.056 ************************************ 00:19:10.056 00:38:43 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid0 4 true 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:10.056 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@212 -- # '[' 
raid0 '!=' raid1 ']' 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=126450 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126450' 00:19:10.057 Process raid pid: 126450 00:19:10.057 00:38:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126450 /var/tmp/spdk-raid.sock 00:19:10.057 00:38:43 -- common/autotest_common.sh@817 -- # '[' -z 126450 ']' 00:19:10.057 00:38:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:10.057 00:38:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:10.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:10.057 00:38:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:10.057 00:38:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:10.057 00:38:43 -- common/autotest_common.sh@10 -- # set +x 00:19:10.057 [2024-04-27 00:38:43.372993] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:19:10.057 [2024-04-27 00:38:43.373173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.057 [2024-04-27 00:38:43.535208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.323 [2024-04-27 00:38:43.720179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.323 [2024-04-27 00:38:43.909928] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.888 00:38:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:10.888 00:38:44 -- common/autotest_common.sh@850 -- # return 0 00:19:10.888 00:38:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:11.146 [2024-04-27 00:38:44.578138] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.146 [2024-04-27 00:38:44.578224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.146 [2024-04-27 00:38:44.578253] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.146 [2024-04-27 00:38:44.578275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.146 [2024-04-27 00:38:44.578283] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:11.146 [2024-04-27 00:38:44.578319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.146 [2024-04-27 00:38:44.578328] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:11.146 [2024-04-27 00:38:44.578349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 
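A minimal sketch of the create-then-verify sequence the trace above exercises, with the command paths, flags, and jq filter copied from the log (-z 64 is the strip size in KB, -s the superblock flag from superblock_create_arg, -r raid0 the level); the final state assertion mirrors what the verify_raid_bdev_state helper checks against the JSON dumps shown in this trace.

# Create the array; its base bdevs do not exist yet, so it stays in "configuring".
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# Fetch the array's JSON and assert on its reported state, as the test helper does.
raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r .state <<<"$raid_bdev_info") == configuring ]]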
00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.146 00:38:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.404 00:38:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.404 "name": "Existed_Raid", 00:19:11.404 "uuid": "d4b95dd2-ddf5-480c-8622-2be72d3ff7c0", 00:19:11.404 "strip_size_kb": 64, 00:19:11.404 "state": "configuring", 00:19:11.404 "raid_level": "raid0", 00:19:11.404 "superblock": true, 00:19:11.404 "num_base_bdevs": 4, 00:19:11.404 "num_base_bdevs_discovered": 0, 00:19:11.404 "num_base_bdevs_operational": 4, 00:19:11.404 "base_bdevs_list": [ 00:19:11.404 { 00:19:11.404 "name": "BaseBdev1", 00:19:11.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.404 "is_configured": false, 00:19:11.404 "data_offset": 0, 00:19:11.404 "data_size": 0 00:19:11.404 }, 00:19:11.404 { 00:19:11.404 "name": "BaseBdev2", 00:19:11.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.404 "is_configured": false, 00:19:11.404 "data_offset": 0, 00:19:11.404 "data_size": 0 00:19:11.404 }, 00:19:11.404 { 00:19:11.404 "name": "BaseBdev3", 00:19:11.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.404 "is_configured": false, 00:19:11.404 "data_offset": 0, 00:19:11.404 "data_size": 0 00:19:11.404 }, 00:19:11.404 { 00:19:11.404 "name": "BaseBdev4", 00:19:11.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.404 "is_configured": false, 00:19:11.404 "data_offset": 0, 00:19:11.404 "data_size": 0 00:19:11.404 } 00:19:11.404 ] 00:19:11.404 }' 00:19:11.404 00:38:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.404 00:38:44 -- common/autotest_common.sh@10 -- # set +x 00:19:11.969 00:38:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:12.227 [2024-04-27 00:38:45.646245] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.227 [2024-04-27 00:38:45.646304] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:19:12.227 00:38:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:12.485 [2024-04-27 00:38:45.874332] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.485 [2024-04-27 00:38:45.874419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.485 [2024-04-27 00:38:45.874431] bdev.c:8084:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.485 [2024-04-27 00:38:45.874455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.485 [2024-04-27 00:38:45.874463] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.485 [2024-04-27 00:38:45.874499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.485 [2024-04-27 00:38:45.874506] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:12.485 [2024-04-27 00:38:45.874528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:12.485 00:38:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.743 [2024-04-27 00:38:46.117684] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.743 BaseBdev1 00:19:12.743 00:38:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:12.743 00:38:46 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:12.743 00:38:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:12.743 00:38:46 -- common/autotest_common.sh@887 -- # local i 00:19:12.743 00:38:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:12.743 00:38:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:12.743 00:38:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.001 00:38:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:13.260 [ 00:19:13.260 { 00:19:13.260 "name": "BaseBdev1", 00:19:13.260 "aliases": [ 00:19:13.260 "c63407fc-847b-4dda-8e88-c89e0a10d060" 00:19:13.260 ], 00:19:13.260 "product_name": "Malloc disk", 00:19:13.260 "block_size": 512, 00:19:13.260 "num_blocks": 65536, 00:19:13.260 "uuid": "c63407fc-847b-4dda-8e88-c89e0a10d060", 00:19:13.260 "assigned_rate_limits": { 00:19:13.260 "rw_ios_per_sec": 0, 00:19:13.260 "rw_mbytes_per_sec": 0, 00:19:13.260 "r_mbytes_per_sec": 0, 00:19:13.260 "w_mbytes_per_sec": 0 00:19:13.260 }, 00:19:13.260 "claimed": true, 00:19:13.260 "claim_type": "exclusive_write", 00:19:13.260 "zoned": false, 00:19:13.260 "supported_io_types": { 00:19:13.260 "read": true, 00:19:13.260 "write": true, 00:19:13.260 "unmap": true, 00:19:13.260 "write_zeroes": true, 00:19:13.260 "flush": true, 00:19:13.260 "reset": true, 00:19:13.260 "compare": false, 00:19:13.260 "compare_and_write": false, 00:19:13.260 "abort": true, 00:19:13.260 "nvme_admin": false, 00:19:13.260 "nvme_io": false 00:19:13.260 }, 00:19:13.260 "memory_domains": [ 00:19:13.260 { 00:19:13.260 "dma_device_id": "system", 00:19:13.260 "dma_device_type": 1 00:19:13.260 }, 00:19:13.260 { 00:19:13.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.260 "dma_device_type": 2 00:19:13.260 } 00:19:13.260 ], 00:19:13.260 "driver_specific": {} 00:19:13.260 } 00:19:13.260 ] 00:19:13.260 00:38:46 -- common/autotest_common.sh@893 -- # return 0 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:13.260 00:38:46 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.260 "name": "Existed_Raid", 00:19:13.260 "uuid": "75ef59b2-df03-4657-9e04-1d58475f67f8", 00:19:13.260 "strip_size_kb": 64, 00:19:13.260 "state": "configuring", 00:19:13.260 "raid_level": "raid0", 00:19:13.260 "superblock": true, 00:19:13.260 "num_base_bdevs": 4, 00:19:13.260 "num_base_bdevs_discovered": 1, 00:19:13.260 "num_base_bdevs_operational": 4, 00:19:13.260 "base_bdevs_list": [ 00:19:13.260 { 00:19:13.260 "name": "BaseBdev1", 00:19:13.260 "uuid": "c63407fc-847b-4dda-8e88-c89e0a10d060", 00:19:13.260 "is_configured": true, 00:19:13.260 "data_offset": 2048, 00:19:13.260 "data_size": 63488 00:19:13.260 }, 00:19:13.260 { 00:19:13.260 "name": "BaseBdev2", 00:19:13.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.260 "is_configured": false, 00:19:13.260 "data_offset": 0, 00:19:13.260 "data_size": 0 00:19:13.260 }, 00:19:13.260 { 00:19:13.260 "name": "BaseBdev3", 00:19:13.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.260 "is_configured": false, 00:19:13.260 "data_offset": 0, 00:19:13.260 "data_size": 0 00:19:13.260 }, 00:19:13.260 { 00:19:13.260 "name": "BaseBdev4", 00:19:13.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.260 "is_configured": false, 00:19:13.260 "data_offset": 0, 00:19:13.260 "data_size": 0 00:19:13.260 } 00:19:13.260 ] 00:19:13.260 }' 00:19:13.260 00:38:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.260 00:38:46 -- common/autotest_common.sh@10 -- # set +x 00:19:14.197 00:38:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:14.197 [2024-04-27 00:38:47.634095] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.197 [2024-04-27 00:38:47.634172] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:19:14.197 00:38:47 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:14.197 00:38:47 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:14.455 00:38:47 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.714 BaseBdev1 00:19:14.714 00:38:48 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:14.714 00:38:48 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:14.714 00:38:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:14.714 00:38:48 -- common/autotest_common.sh@887 -- # local i 00:19:14.714 00:38:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:14.714 00:38:48 -- common/autotest_common.sh@888 -- # 
bdev_timeout=2000 00:19:14.714 00:38:48 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:14.973 00:38:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:15.233 [ 00:19:15.233 { 00:19:15.233 "name": "BaseBdev1", 00:19:15.233 "aliases": [ 00:19:15.233 "36520069-d14c-4677-ae0a-1c086dad41f9" 00:19:15.233 ], 00:19:15.233 "product_name": "Malloc disk", 00:19:15.233 "block_size": 512, 00:19:15.233 "num_blocks": 65536, 00:19:15.233 "uuid": "36520069-d14c-4677-ae0a-1c086dad41f9", 00:19:15.233 "assigned_rate_limits": { 00:19:15.233 "rw_ios_per_sec": 0, 00:19:15.233 "rw_mbytes_per_sec": 0, 00:19:15.233 "r_mbytes_per_sec": 0, 00:19:15.233 "w_mbytes_per_sec": 0 00:19:15.233 }, 00:19:15.233 "claimed": false, 00:19:15.233 "zoned": false, 00:19:15.233 "supported_io_types": { 00:19:15.233 "read": true, 00:19:15.233 "write": true, 00:19:15.233 "unmap": true, 00:19:15.233 "write_zeroes": true, 00:19:15.233 "flush": true, 00:19:15.233 "reset": true, 00:19:15.233 "compare": false, 00:19:15.233 "compare_and_write": false, 00:19:15.233 "abort": true, 00:19:15.233 "nvme_admin": false, 00:19:15.233 "nvme_io": false 00:19:15.233 }, 00:19:15.233 "memory_domains": [ 00:19:15.233 { 00:19:15.233 "dma_device_id": "system", 00:19:15.233 "dma_device_type": 1 00:19:15.233 }, 00:19:15.233 { 00:19:15.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.233 "dma_device_type": 2 00:19:15.233 } 00:19:15.233 ], 00:19:15.233 "driver_specific": {} 00:19:15.233 } 00:19:15.233 ] 00:19:15.233 00:38:48 -- common/autotest_common.sh@893 -- # return 0 00:19:15.233 00:38:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:15.493 [2024-04-27 00:38:48.868833] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:15.493 [2024-04-27 00:38:48.870897] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:15.493 [2024-04-27 00:38:48.871026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:15.493 [2024-04-27 00:38:48.871040] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:15.493 [2024-04-27 00:38:48.871067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:15.493 [2024-04-27 00:38:48.871076] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:15.493 [2024-04-27 00:38:48.871094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:15.493 00:38:48 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.493 00:38:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.752 00:38:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.752 "name": "Existed_Raid", 00:19:15.752 "uuid": "3e81ddc4-1c1c-43bf-b8b1-a48647f7562b", 00:19:15.752 "strip_size_kb": 64, 00:19:15.752 "state": "configuring", 00:19:15.752 "raid_level": "raid0", 00:19:15.752 "superblock": true, 00:19:15.752 "num_base_bdevs": 4, 00:19:15.752 "num_base_bdevs_discovered": 1, 00:19:15.752 "num_base_bdevs_operational": 4, 00:19:15.752 "base_bdevs_list": [ 00:19:15.752 { 00:19:15.752 "name": "BaseBdev1", 00:19:15.752 "uuid": "36520069-d14c-4677-ae0a-1c086dad41f9", 00:19:15.752 "is_configured": true, 00:19:15.752 "data_offset": 2048, 00:19:15.752 "data_size": 63488 00:19:15.752 }, 00:19:15.752 { 00:19:15.752 "name": "BaseBdev2", 00:19:15.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.752 "is_configured": false, 00:19:15.752 "data_offset": 0, 00:19:15.752 "data_size": 0 00:19:15.752 }, 00:19:15.752 { 00:19:15.752 "name": "BaseBdev3", 00:19:15.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.752 "is_configured": false, 00:19:15.752 "data_offset": 0, 00:19:15.752 "data_size": 0 00:19:15.752 }, 00:19:15.752 { 00:19:15.752 "name": "BaseBdev4", 00:19:15.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.752 "is_configured": false, 00:19:15.752 "data_offset": 0, 00:19:15.752 "data_size": 0 00:19:15.752 } 00:19:15.752 ] 00:19:15.752 }' 00:19:15.752 00:38:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.752 00:38:49 -- common/autotest_common.sh@10 -- # set +x 00:19:16.319 00:38:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:16.578 [2024-04-27 00:38:49.973765] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.578 BaseBdev2 00:19:16.578 00:38:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:16.578 00:38:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:16.578 00:38:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:16.578 00:38:49 -- common/autotest_common.sh@887 -- # local i 00:19:16.578 00:38:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:16.578 00:38:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:16.578 00:38:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:16.836 00:38:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:17.096 [ 00:19:17.096 { 00:19:17.096 "name": "BaseBdev2", 00:19:17.096 "aliases": [ 00:19:17.096 "e1063d80-6c6b-4455-a95e-442d267eae21" 00:19:17.096 ], 00:19:17.096 "product_name": "Malloc disk", 00:19:17.096 "block_size": 512, 00:19:17.096 "num_blocks": 65536, 00:19:17.096 "uuid": "e1063d80-6c6b-4455-a95e-442d267eae21", 00:19:17.096 "assigned_rate_limits": { 00:19:17.096 "rw_ios_per_sec": 0, 00:19:17.096 
"rw_mbytes_per_sec": 0, 00:19:17.096 "r_mbytes_per_sec": 0, 00:19:17.096 "w_mbytes_per_sec": 0 00:19:17.096 }, 00:19:17.096 "claimed": true, 00:19:17.096 "claim_type": "exclusive_write", 00:19:17.096 "zoned": false, 00:19:17.096 "supported_io_types": { 00:19:17.096 "read": true, 00:19:17.096 "write": true, 00:19:17.096 "unmap": true, 00:19:17.096 "write_zeroes": true, 00:19:17.096 "flush": true, 00:19:17.096 "reset": true, 00:19:17.096 "compare": false, 00:19:17.096 "compare_and_write": false, 00:19:17.096 "abort": true, 00:19:17.096 "nvme_admin": false, 00:19:17.096 "nvme_io": false 00:19:17.096 }, 00:19:17.096 "memory_domains": [ 00:19:17.096 { 00:19:17.096 "dma_device_id": "system", 00:19:17.096 "dma_device_type": 1 00:19:17.096 }, 00:19:17.096 { 00:19:17.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.096 "dma_device_type": 2 00:19:17.096 } 00:19:17.096 ], 00:19:17.096 "driver_specific": {} 00:19:17.096 } 00:19:17.096 ] 00:19:17.096 00:38:50 -- common/autotest_common.sh@893 -- # return 0 00:19:17.096 00:38:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:17.096 00:38:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:17.096 00:38:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:17.096 00:38:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:17.096 00:38:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:17.097 "name": "Existed_Raid", 00:19:17.097 "uuid": "3e81ddc4-1c1c-43bf-b8b1-a48647f7562b", 00:19:17.097 "strip_size_kb": 64, 00:19:17.097 "state": "configuring", 00:19:17.097 "raid_level": "raid0", 00:19:17.097 "superblock": true, 00:19:17.097 "num_base_bdevs": 4, 00:19:17.097 "num_base_bdevs_discovered": 2, 00:19:17.097 "num_base_bdevs_operational": 4, 00:19:17.097 "base_bdevs_list": [ 00:19:17.097 { 00:19:17.097 "name": "BaseBdev1", 00:19:17.097 "uuid": "36520069-d14c-4677-ae0a-1c086dad41f9", 00:19:17.097 "is_configured": true, 00:19:17.097 "data_offset": 2048, 00:19:17.097 "data_size": 63488 00:19:17.097 }, 00:19:17.097 { 00:19:17.097 "name": "BaseBdev2", 00:19:17.097 "uuid": "e1063d80-6c6b-4455-a95e-442d267eae21", 00:19:17.097 "is_configured": true, 00:19:17.097 "data_offset": 2048, 00:19:17.097 "data_size": 63488 00:19:17.097 }, 00:19:17.097 { 00:19:17.097 "name": "BaseBdev3", 00:19:17.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.097 "is_configured": false, 00:19:17.097 "data_offset": 0, 00:19:17.097 "data_size": 0 00:19:17.097 }, 00:19:17.097 { 00:19:17.097 "name": "BaseBdev4", 00:19:17.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.097 "is_configured": false, 00:19:17.097 "data_offset": 0, 00:19:17.097 "data_size": 
0 00:19:17.097 } 00:19:17.097 ] 00:19:17.097 }' 00:19:17.097 00:38:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:17.097 00:38:50 -- common/autotest_common.sh@10 -- # set +x 00:19:18.033 00:38:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:18.033 [2024-04-27 00:38:51.505663] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:18.033 BaseBdev3 00:19:18.033 00:38:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:18.033 00:38:51 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:18.033 00:38:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:18.033 00:38:51 -- common/autotest_common.sh@887 -- # local i 00:19:18.033 00:38:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:18.033 00:38:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:18.033 00:38:51 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:18.292 00:38:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:18.551 [ 00:19:18.551 { 00:19:18.551 "name": "BaseBdev3", 00:19:18.551 "aliases": [ 00:19:18.551 "2545cd1b-3b10-4eb4-8965-af1c122c92f1" 00:19:18.551 ], 00:19:18.551 "product_name": "Malloc disk", 00:19:18.551 "block_size": 512, 00:19:18.551 "num_blocks": 65536, 00:19:18.551 "uuid": "2545cd1b-3b10-4eb4-8965-af1c122c92f1", 00:19:18.551 "assigned_rate_limits": { 00:19:18.551 "rw_ios_per_sec": 0, 00:19:18.551 "rw_mbytes_per_sec": 0, 00:19:18.551 "r_mbytes_per_sec": 0, 00:19:18.551 "w_mbytes_per_sec": 0 00:19:18.551 }, 00:19:18.551 "claimed": true, 00:19:18.551 "claim_type": "exclusive_write", 00:19:18.551 "zoned": false, 00:19:18.551 "supported_io_types": { 00:19:18.551 "read": true, 00:19:18.551 "write": true, 00:19:18.551 "unmap": true, 00:19:18.551 "write_zeroes": true, 00:19:18.551 "flush": true, 00:19:18.551 "reset": true, 00:19:18.551 "compare": false, 00:19:18.551 "compare_and_write": false, 00:19:18.551 "abort": true, 00:19:18.551 "nvme_admin": false, 00:19:18.551 "nvme_io": false 00:19:18.551 }, 00:19:18.551 "memory_domains": [ 00:19:18.551 { 00:19:18.551 "dma_device_id": "system", 00:19:18.551 "dma_device_type": 1 00:19:18.551 }, 00:19:18.551 { 00:19:18.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.551 "dma_device_type": 2 00:19:18.551 } 00:19:18.551 ], 00:19:18.551 "driver_specific": {} 00:19:18.551 } 00:19:18.551 ] 00:19:18.551 00:38:52 -- common/autotest_common.sh@893 -- # return 0 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@124 -- # 
local num_base_bdevs_discovered 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.551 00:38:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.809 00:38:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.809 "name": "Existed_Raid", 00:19:18.809 "uuid": "3e81ddc4-1c1c-43bf-b8b1-a48647f7562b", 00:19:18.809 "strip_size_kb": 64, 00:19:18.809 "state": "configuring", 00:19:18.809 "raid_level": "raid0", 00:19:18.809 "superblock": true, 00:19:18.809 "num_base_bdevs": 4, 00:19:18.809 "num_base_bdevs_discovered": 3, 00:19:18.809 "num_base_bdevs_operational": 4, 00:19:18.809 "base_bdevs_list": [ 00:19:18.809 { 00:19:18.809 "name": "BaseBdev1", 00:19:18.809 "uuid": "36520069-d14c-4677-ae0a-1c086dad41f9", 00:19:18.809 "is_configured": true, 00:19:18.809 "data_offset": 2048, 00:19:18.809 "data_size": 63488 00:19:18.809 }, 00:19:18.809 { 00:19:18.809 "name": "BaseBdev2", 00:19:18.809 "uuid": "e1063d80-6c6b-4455-a95e-442d267eae21", 00:19:18.809 "is_configured": true, 00:19:18.809 "data_offset": 2048, 00:19:18.809 "data_size": 63488 00:19:18.809 }, 00:19:18.809 { 00:19:18.809 "name": "BaseBdev3", 00:19:18.809 "uuid": "2545cd1b-3b10-4eb4-8965-af1c122c92f1", 00:19:18.809 "is_configured": true, 00:19:18.809 "data_offset": 2048, 00:19:18.809 "data_size": 63488 00:19:18.809 }, 00:19:18.809 { 00:19:18.809 "name": "BaseBdev4", 00:19:18.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.809 "is_configured": false, 00:19:18.809 "data_offset": 0, 00:19:18.809 "data_size": 0 00:19:18.809 } 00:19:18.809 ] 00:19:18.809 }' 00:19:18.809 00:38:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.809 00:38:52 -- common/autotest_common.sh@10 -- # set +x 00:19:19.377 00:38:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:19.635 [2024-04-27 00:38:53.142589] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:19.635 [2024-04-27 00:38:53.142865] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:19.635 [2024-04-27 00:38:53.142880] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:19.635 [2024-04-27 00:38:53.143070] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:19.635 [2024-04-27 00:38:53.143494] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:19.635 [2024-04-27 00:38:53.143509] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:19:19.635 [2024-04-27 00:38:53.143649] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.635 BaseBdev4 00:19:19.635 00:38:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:19.635 00:38:53 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:19:19.635 00:38:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:19.635 00:38:53 -- common/autotest_common.sh@887 -- # local i 00:19:19.635 00:38:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:19.635 00:38:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:19.635 00:38:53 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:19:19.893 00:38:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:20.152 [ 00:19:20.152 { 00:19:20.152 "name": "BaseBdev4", 00:19:20.152 "aliases": [ 00:19:20.152 "202672a2-1b17-4799-a1dc-ad381f5bd67b" 00:19:20.152 ], 00:19:20.152 "product_name": "Malloc disk", 00:19:20.152 "block_size": 512, 00:19:20.152 "num_blocks": 65536, 00:19:20.152 "uuid": "202672a2-1b17-4799-a1dc-ad381f5bd67b", 00:19:20.152 "assigned_rate_limits": { 00:19:20.152 "rw_ios_per_sec": 0, 00:19:20.152 "rw_mbytes_per_sec": 0, 00:19:20.152 "r_mbytes_per_sec": 0, 00:19:20.152 "w_mbytes_per_sec": 0 00:19:20.152 }, 00:19:20.152 "claimed": true, 00:19:20.152 "claim_type": "exclusive_write", 00:19:20.152 "zoned": false, 00:19:20.152 "supported_io_types": { 00:19:20.152 "read": true, 00:19:20.152 "write": true, 00:19:20.152 "unmap": true, 00:19:20.152 "write_zeroes": true, 00:19:20.152 "flush": true, 00:19:20.152 "reset": true, 00:19:20.152 "compare": false, 00:19:20.152 "compare_and_write": false, 00:19:20.152 "abort": true, 00:19:20.152 "nvme_admin": false, 00:19:20.152 "nvme_io": false 00:19:20.152 }, 00:19:20.152 "memory_domains": [ 00:19:20.152 { 00:19:20.152 "dma_device_id": "system", 00:19:20.152 "dma_device_type": 1 00:19:20.152 }, 00:19:20.152 { 00:19:20.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.152 "dma_device_type": 2 00:19:20.152 } 00:19:20.152 ], 00:19:20.152 "driver_specific": {} 00:19:20.152 } 00:19:20.152 ] 00:19:20.152 00:38:53 -- common/autotest_common.sh@893 -- # return 0 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.152 00:38:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.410 00:38:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:20.410 "name": "Existed_Raid", 00:19:20.410 "uuid": "3e81ddc4-1c1c-43bf-b8b1-a48647f7562b", 00:19:20.410 "strip_size_kb": 64, 00:19:20.410 "state": "online", 00:19:20.410 "raid_level": "raid0", 00:19:20.410 "superblock": true, 00:19:20.410 "num_base_bdevs": 4, 00:19:20.410 "num_base_bdevs_discovered": 4, 00:19:20.410 "num_base_bdevs_operational": 4, 00:19:20.410 "base_bdevs_list": [ 00:19:20.410 { 00:19:20.410 "name": "BaseBdev1", 00:19:20.410 "uuid": "36520069-d14c-4677-ae0a-1c086dad41f9", 00:19:20.410 "is_configured": true, 00:19:20.410 "data_offset": 2048, 00:19:20.410 "data_size": 63488 00:19:20.410 }, 00:19:20.410 { 00:19:20.410 "name": "BaseBdev2", 00:19:20.411 
"uuid": "e1063d80-6c6b-4455-a95e-442d267eae21", 00:19:20.411 "is_configured": true, 00:19:20.411 "data_offset": 2048, 00:19:20.411 "data_size": 63488 00:19:20.411 }, 00:19:20.411 { 00:19:20.411 "name": "BaseBdev3", 00:19:20.411 "uuid": "2545cd1b-3b10-4eb4-8965-af1c122c92f1", 00:19:20.411 "is_configured": true, 00:19:20.411 "data_offset": 2048, 00:19:20.411 "data_size": 63488 00:19:20.411 }, 00:19:20.411 { 00:19:20.411 "name": "BaseBdev4", 00:19:20.411 "uuid": "202672a2-1b17-4799-a1dc-ad381f5bd67b", 00:19:20.411 "is_configured": true, 00:19:20.411 "data_offset": 2048, 00:19:20.411 "data_size": 63488 00:19:20.411 } 00:19:20.411 ] 00:19:20.411 }' 00:19:20.411 00:38:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:20.411 00:38:53 -- common/autotest_common.sh@10 -- # set +x 00:19:20.976 00:38:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:21.235 [2024-04-27 00:38:54.627178] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:21.235 [2024-04-27 00:38:54.627215] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.235 [2024-04-27 00:38:54.627333] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.235 00:38:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.494 00:38:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.494 "name": "Existed_Raid", 00:19:21.494 "uuid": "3e81ddc4-1c1c-43bf-b8b1-a48647f7562b", 00:19:21.494 "strip_size_kb": 64, 00:19:21.494 "state": "offline", 00:19:21.494 "raid_level": "raid0", 00:19:21.494 "superblock": true, 00:19:21.494 "num_base_bdevs": 4, 00:19:21.494 "num_base_bdevs_discovered": 3, 00:19:21.494 "num_base_bdevs_operational": 3, 00:19:21.494 "base_bdevs_list": [ 00:19:21.494 { 00:19:21.494 "name": null, 00:19:21.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.494 "is_configured": false, 00:19:21.494 "data_offset": 2048, 00:19:21.494 "data_size": 63488 00:19:21.494 }, 00:19:21.494 { 00:19:21.494 "name": "BaseBdev2", 00:19:21.494 "uuid": "e1063d80-6c6b-4455-a95e-442d267eae21", 00:19:21.494 "is_configured": true, 00:19:21.494 "data_offset": 2048, 
00:19:21.494 "data_size": 63488 00:19:21.494 }, 00:19:21.494 { 00:19:21.494 "name": "BaseBdev3", 00:19:21.494 "uuid": "2545cd1b-3b10-4eb4-8965-af1c122c92f1", 00:19:21.494 "is_configured": true, 00:19:21.494 "data_offset": 2048, 00:19:21.494 "data_size": 63488 00:19:21.494 }, 00:19:21.494 { 00:19:21.494 "name": "BaseBdev4", 00:19:21.494 "uuid": "202672a2-1b17-4799-a1dc-ad381f5bd67b", 00:19:21.494 "is_configured": true, 00:19:21.494 "data_offset": 2048, 00:19:21.494 "data_size": 63488 00:19:21.494 } 00:19:21.494 ] 00:19:21.494 }' 00:19:21.494 00:38:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.494 00:38:54 -- common/autotest_common.sh@10 -- # set +x 00:19:22.061 00:38:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:22.061 00:38:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.061 00:38:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.061 00:38:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:22.319 00:38:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:22.319 00:38:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:22.319 00:38:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:22.577 [2024-04-27 00:38:56.097785] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:22.846 00:38:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:22.846 00:38:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:22.846 00:38:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.846 00:38:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.125 00:38:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.125 00:38:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.125 00:38:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:23.125 [2024-04-27 00:38:56.677523] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:23.384 00:38:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.384 00:38:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.384 00:38:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.384 00:38:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:23.642 00:38:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:23.643 00:38:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:23.643 00:38:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:23.643 [2024-04-27 00:38:57.172197] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:23.643 [2024-04-27 00:38:57.172272] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:19:23.901 00:38:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:23.901 00:38:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:23.901 00:38:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.901 00:38:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 
00:19:23.901 00:38:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:23.901 00:38:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:23.901 00:38:57 -- bdev/bdev_raid.sh@287 -- # killprocess 126450 00:19:23.901 00:38:57 -- common/autotest_common.sh@936 -- # '[' -z 126450 ']' 00:19:23.901 00:38:57 -- common/autotest_common.sh@940 -- # kill -0 126450 00:19:23.901 00:38:57 -- common/autotest_common.sh@941 -- # uname 00:19:23.901 00:38:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:23.901 00:38:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126450 00:19:24.160 00:38:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:24.160 00:38:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:24.160 00:38:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126450' 00:19:24.160 killing process with pid 126450 00:19:24.160 00:38:57 -- common/autotest_common.sh@955 -- # kill 126450 00:19:24.160 [2024-04-27 00:38:57.496304] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.160 00:38:57 -- common/autotest_common.sh@960 -- # wait 126450 00:19:24.160 [2024-04-27 00:38:57.496427] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:25.098 00:19:25.098 real 0m15.186s 00:19:25.098 user 0m27.108s 00:19:25.098 sys 0m1.778s 00:19:25.098 00:38:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:25.098 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 ************************************ 00:19:25.098 END TEST raid_state_function_test_sb 00:19:25.098 ************************************ 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:19:25.098 00:38:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:25.098 00:38:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:25.098 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 ************************************ 00:19:25.098 START TEST raid_superblock_test 00:19:25.098 ************************************ 00:19:25.098 00:38:58 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid0 4 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@357 -- # raid_pid=126910 00:19:25.098 00:38:58 -- 
bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:25.098 00:38:58 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126910 /var/tmp/spdk-raid.sock 00:19:25.098 00:38:58 -- common/autotest_common.sh@817 -- # '[' -z 126910 ']' 00:19:25.098 00:38:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:25.098 00:38:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:25.098 00:38:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:25.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:25.098 00:38:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:25.098 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:19:25.098 [2024-04-27 00:38:58.630508] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:19:25.098 [2024-04-27 00:38:58.630689] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126910 ] 00:19:25.357 [2024-04-27 00:38:58.784747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.616 [2024-04-27 00:38:58.982296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.616 [2024-04-27 00:38:59.153871] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.183 00:38:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:26.183 00:38:59 -- common/autotest_common.sh@850 -- # return 0 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.183 00:38:59 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:26.442 malloc1 00:19:26.442 00:38:59 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:26.699 [2024-04-27 00:39:00.091258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:26.699 [2024-04-27 00:39:00.091399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.699 [2024-04-27 00:39:00.091453] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:26.699 [2024-04-27 00:39:00.091567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.699 [2024-04-27 00:39:00.094319] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.699 [2024-04-27 00:39:00.094412] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:26.699 pt1 00:19:26.699 00:39:00 
-- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:26.699 00:39:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:26.699 00:39:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:26.699 00:39:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:26.699 00:39:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:26.699 00:39:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:26.699 00:39:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:26.699 00:39:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:26.699 00:39:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:26.957 malloc2 00:19:26.957 00:39:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:27.216 [2024-04-27 00:39:00.607281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:27.216 [2024-04-27 00:39:00.607421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.216 [2024-04-27 00:39:00.607483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:27.216 [2024-04-27 00:39:00.607538] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.216 [2024-04-27 00:39:00.609969] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.216 [2024-04-27 00:39:00.610035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:27.216 pt2 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.216 00:39:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:27.475 malloc3 00:19:27.475 00:39:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:27.732 [2024-04-27 00:39:01.085045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:27.732 [2024-04-27 00:39:01.085144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.733 [2024-04-27 00:39:01.085226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:27.733 [2024-04-27 00:39:01.085270] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.733 [2024-04-27 00:39:01.087849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.733 [2024-04-27 00:39:01.087910] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:27.733 pt3 00:19:27.733 00:39:01 -- 
bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:27.733 00:39:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:27.733 00:39:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:27.733 00:39:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:27.733 00:39:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:27.733 00:39:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.733 00:39:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.733 00:39:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.733 00:39:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:27.990 malloc4 00:19:27.990 00:39:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:28.247 [2024-04-27 00:39:01.609734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:28.247 [2024-04-27 00:39:01.609856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.247 [2024-04-27 00:39:01.609891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:28.247 [2024-04-27 00:39:01.609934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.247 [2024-04-27 00:39:01.612421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.247 [2024-04-27 00:39:01.612490] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:28.247 pt4 00:19:28.247 00:39:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:28.247 00:39:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:28.247 00:39:01 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:28.247 [2024-04-27 00:39:01.833910] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:28.505 [2024-04-27 00:39:01.836282] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.505 [2024-04-27 00:39:01.836379] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:28.505 [2024-04-27 00:39:01.836467] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:28.505 [2024-04-27 00:39:01.836819] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:19:28.505 [2024-04-27 00:39:01.836843] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:28.505 [2024-04-27 00:39:01.836994] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:28.505 [2024-04-27 00:39:01.837403] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:19:28.505 [2024-04-27 00:39:01.837428] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:19:28.505 [2024-04-27 00:39:01.837675] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.505 00:39:01 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.505 00:39:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.505 00:39:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.505 "name": "raid_bdev1", 00:19:28.505 "uuid": "4dcf07f3-2187-4edd-a6e8-20c09ec4e535", 00:19:28.505 "strip_size_kb": 64, 00:19:28.505 "state": "online", 00:19:28.505 "raid_level": "raid0", 00:19:28.505 "superblock": true, 00:19:28.505 "num_base_bdevs": 4, 00:19:28.505 "num_base_bdevs_discovered": 4, 00:19:28.505 "num_base_bdevs_operational": 4, 00:19:28.505 "base_bdevs_list": [ 00:19:28.505 { 00:19:28.505 "name": "pt1", 00:19:28.505 "uuid": "26be52de-6215-5218-b12c-020103a8282c", 00:19:28.505 "is_configured": true, 00:19:28.505 "data_offset": 2048, 00:19:28.505 "data_size": 63488 00:19:28.505 }, 00:19:28.505 { 00:19:28.505 "name": "pt2", 00:19:28.505 "uuid": "e3302e43-3c0b-5855-8bd8-0977ddc98bc0", 00:19:28.505 "is_configured": true, 00:19:28.505 "data_offset": 2048, 00:19:28.505 "data_size": 63488 00:19:28.505 }, 00:19:28.505 { 00:19:28.505 "name": "pt3", 00:19:28.505 "uuid": "504532d7-8916-5b59-935b-8ae88ee47647", 00:19:28.505 "is_configured": true, 00:19:28.505 "data_offset": 2048, 00:19:28.505 "data_size": 63488 00:19:28.505 }, 00:19:28.505 { 00:19:28.505 "name": "pt4", 00:19:28.505 "uuid": "493990ee-2c7e-541e-acce-458474080356", 00:19:28.505 "is_configured": true, 00:19:28.505 "data_offset": 2048, 00:19:28.505 "data_size": 63488 00:19:28.505 } 00:19:28.505 ] 00:19:28.505 }' 00:19:28.506 00:39:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.506 00:39:02 -- common/autotest_common.sh@10 -- # set +x 00:19:29.441 00:39:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:29.441 00:39:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:29.441 [2024-04-27 00:39:02.990472] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.441 00:39:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=4dcf07f3-2187-4edd-a6e8-20c09ec4e535 00:19:29.441 00:39:03 -- bdev/bdev_raid.sh@380 -- # '[' -z 4dcf07f3-2187-4edd-a6e8-20c09ec4e535 ']' 00:19:29.441 00:39:03 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:29.719 [2024-04-27 00:39:03.258229] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.719 [2024-04-27 00:39:03.258267] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.719 [2024-04-27 00:39:03.258384] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.719 [2024-04-27 00:39:03.258460] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:19:29.719 [2024-04-27 00:39:03.258471] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:19:29.719 00:39:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.719 00:39:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:29.983 00:39:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:29.983 00:39:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:29.983 00:39:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:29.983 00:39:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:30.239 00:39:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:30.239 00:39:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:30.496 00:39:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:30.496 00:39:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:30.753 00:39:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:30.753 00:39:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:31.011 00:39:04 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:31.011 00:39:04 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:31.268 00:39:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:31.268 00:39:04 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:31.268 00:39:04 -- common/autotest_common.sh@638 -- # local es=0 00:19:31.268 00:39:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:31.268 00:39:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.268 00:39:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:31.268 00:39:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.268 00:39:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:31.268 00:39:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.268 00:39:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:31.268 00:39:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.268 00:39:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:31.268 00:39:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:31.527 [2024-04-27 00:39:04.882506] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:31.527 [2024-04-27 00:39:04.884548] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:31.527 
[2024-04-27 00:39:04.884607] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:31.527 [2024-04-27 00:39:04.884656] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:31.527 [2024-04-27 00:39:04.884714] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:31.527 [2024-04-27 00:39:04.884850] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:31.527 [2024-04-27 00:39:04.884911] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:31.527 [2024-04-27 00:39:04.884977] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:31.527 [2024-04-27 00:39:04.885005] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.527 [2024-04-27 00:39:04.885016] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:19:31.527 request: 00:19:31.527 { 00:19:31.527 "name": "raid_bdev1", 00:19:31.527 "raid_level": "raid0", 00:19:31.527 "base_bdevs": [ 00:19:31.527 "malloc1", 00:19:31.527 "malloc2", 00:19:31.527 "malloc3", 00:19:31.527 "malloc4" 00:19:31.527 ], 00:19:31.527 "superblock": false, 00:19:31.527 "strip_size_kb": 64, 00:19:31.527 "method": "bdev_raid_create", 00:19:31.527 "req_id": 1 00:19:31.527 } 00:19:31.527 Got JSON-RPC error response 00:19:31.527 response: 00:19:31.527 { 00:19:31.527 "code": -17, 00:19:31.527 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:31.527 } 00:19:31.527 00:39:04 -- common/autotest_common.sh@641 -- # es=1 00:19:31.527 00:39:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:31.527 00:39:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:31.527 00:39:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:31.527 00:39:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.527 00:39:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:31.786 [2024-04-27 00:39:05.346579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:31.786 [2024-04-27 00:39:05.346686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.786 [2024-04-27 00:39:05.346721] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:31.786 [2024-04-27 00:39:05.346748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.786 [2024-04-27 00:39:05.349185] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.786 [2024-04-27 00:39:05.349256] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:31.786 [2024-04-27 00:39:05.349414] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:31.786 [2024-04-27 00:39:05.349472] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:31.786 pt1 
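[editor's note] This block shows why the -s flag matters after teardown: the raid superblock written at assembly time survives on the malloc bdevs, so a direct bdev_raid_create over them is rejected with JSON-RPC error -17 ("File exists"), while simply re-registering a passthru bdev lets the examine path read the on-disk superblock and re-claim the member into raid_bdev1. A hedged sketch of both outcomes, reusing the names and UUIDs from this log:

    # Rejected: each malloc bdev still carries raid_bdev1's superblock.
    $rpc bdev_raid_create -z 64 -r raid0 \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1   # -17, "File exists"
    # Accepted: registering pt1 triggers examine, which finds the superblock
    # and re-attaches pt1 to raid_bdev1 (state returns to "configuring").
    $rpc bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001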
00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.786 00:39:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.044 00:39:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.044 "name": "raid_bdev1", 00:19:32.044 "uuid": "4dcf07f3-2187-4edd-a6e8-20c09ec4e535", 00:19:32.044 "strip_size_kb": 64, 00:19:32.044 "state": "configuring", 00:19:32.044 "raid_level": "raid0", 00:19:32.044 "superblock": true, 00:19:32.044 "num_base_bdevs": 4, 00:19:32.044 "num_base_bdevs_discovered": 1, 00:19:32.044 "num_base_bdevs_operational": 4, 00:19:32.044 "base_bdevs_list": [ 00:19:32.044 { 00:19:32.044 "name": "pt1", 00:19:32.044 "uuid": "26be52de-6215-5218-b12c-020103a8282c", 00:19:32.044 "is_configured": true, 00:19:32.044 "data_offset": 2048, 00:19:32.044 "data_size": 63488 00:19:32.044 }, 00:19:32.044 { 00:19:32.044 "name": null, 00:19:32.044 "uuid": "e3302e43-3c0b-5855-8bd8-0977ddc98bc0", 00:19:32.044 "is_configured": false, 00:19:32.044 "data_offset": 2048, 00:19:32.044 "data_size": 63488 00:19:32.044 }, 00:19:32.044 { 00:19:32.044 "name": null, 00:19:32.044 "uuid": "504532d7-8916-5b59-935b-8ae88ee47647", 00:19:32.044 "is_configured": false, 00:19:32.044 "data_offset": 2048, 00:19:32.044 "data_size": 63488 00:19:32.044 }, 00:19:32.044 { 00:19:32.044 "name": null, 00:19:32.044 "uuid": "493990ee-2c7e-541e-acce-458474080356", 00:19:32.044 "is_configured": false, 00:19:32.044 "data_offset": 2048, 00:19:32.044 "data_size": 63488 00:19:32.044 } 00:19:32.044 ] 00:19:32.044 }' 00:19:32.044 00:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.044 00:39:05 -- common/autotest_common.sh@10 -- # set +x 00:19:32.977 00:39:06 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:32.977 00:39:06 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:32.977 [2024-04-27 00:39:06.493945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:32.977 [2024-04-27 00:39:06.494050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.977 [2024-04-27 00:39:06.494094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:32.977 [2024-04-27 00:39:06.494117] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.977 [2024-04-27 00:39:06.494718] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.977 [2024-04-27 00:39:06.494776] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:19:32.977 [2024-04-27 00:39:06.494905] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:32.977 [2024-04-27 00:39:06.494945] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:32.977 pt2 00:19:32.977 00:39:06 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:33.234 [2024-04-27 00:39:06.750012] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.234 00:39:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.492 00:39:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.492 "name": "raid_bdev1", 00:19:33.492 "uuid": "4dcf07f3-2187-4edd-a6e8-20c09ec4e535", 00:19:33.492 "strip_size_kb": 64, 00:19:33.492 "state": "configuring", 00:19:33.492 "raid_level": "raid0", 00:19:33.492 "superblock": true, 00:19:33.492 "num_base_bdevs": 4, 00:19:33.492 "num_base_bdevs_discovered": 1, 00:19:33.492 "num_base_bdevs_operational": 4, 00:19:33.492 "base_bdevs_list": [ 00:19:33.492 { 00:19:33.492 "name": "pt1", 00:19:33.492 "uuid": "26be52de-6215-5218-b12c-020103a8282c", 00:19:33.492 "is_configured": true, 00:19:33.492 "data_offset": 2048, 00:19:33.492 "data_size": 63488 00:19:33.492 }, 00:19:33.492 { 00:19:33.492 "name": null, 00:19:33.492 "uuid": "e3302e43-3c0b-5855-8bd8-0977ddc98bc0", 00:19:33.492 "is_configured": false, 00:19:33.492 "data_offset": 2048, 00:19:33.492 "data_size": 63488 00:19:33.492 }, 00:19:33.492 { 00:19:33.492 "name": null, 00:19:33.492 "uuid": "504532d7-8916-5b59-935b-8ae88ee47647", 00:19:33.492 "is_configured": false, 00:19:33.492 "data_offset": 2048, 00:19:33.492 "data_size": 63488 00:19:33.492 }, 00:19:33.492 { 00:19:33.492 "name": null, 00:19:33.492 "uuid": "493990ee-2c7e-541e-acce-458474080356", 00:19:33.492 "is_configured": false, 00:19:33.492 "data_offset": 2048, 00:19:33.492 "data_size": 63488 00:19:33.492 } 00:19:33.492 ] 00:19:33.492 }' 00:19:33.492 00:39:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.492 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:19:34.427 00:39:07 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:34.427 00:39:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:34.427 00:39:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:34.427 [2024-04-27 00:39:07.926277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 
00:19:34.427 [2024-04-27 00:39:07.926421] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.427 [2024-04-27 00:39:07.926467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:34.427 [2024-04-27 00:39:07.926490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.427 [2024-04-27 00:39:07.927070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.427 [2024-04-27 00:39:07.927129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:34.427 [2024-04-27 00:39:07.927240] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:34.427 [2024-04-27 00:39:07.927266] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:34.427 pt2 00:19:34.428 00:39:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:34.428 00:39:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:34.428 00:39:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:34.685 [2024-04-27 00:39:08.142288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:34.685 [2024-04-27 00:39:08.142407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.685 [2024-04-27 00:39:08.142442] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:34.685 [2024-04-27 00:39:08.142469] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.685 [2024-04-27 00:39:08.142952] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.685 [2024-04-27 00:39:08.143051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:34.685 [2024-04-27 00:39:08.143152] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:34.686 [2024-04-27 00:39:08.143175] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:34.686 pt3 00:19:34.686 00:39:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:34.686 00:39:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:34.686 00:39:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:34.944 [2024-04-27 00:39:08.362346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:34.944 [2024-04-27 00:39:08.362489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.944 [2024-04-27 00:39:08.362541] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:34.944 [2024-04-27 00:39:08.362574] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.944 [2024-04-27 00:39:08.363118] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.944 [2024-04-27 00:39:08.363170] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:34.944 [2024-04-27 00:39:08.363298] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:34.944 [2024-04-27 00:39:08.363324] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:34.944 [2024-04-27 
00:39:08.363491] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:34.944 [2024-04-27 00:39:08.363503] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:34.944 [2024-04-27 00:39:08.363613] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:34.944 [2024-04-27 00:39:08.363947] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:34.944 [2024-04-27 00:39:08.363973] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:19:34.944 [2024-04-27 00:39:08.364135] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.944 pt4 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.944 00:39:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.202 00:39:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:35.202 "name": "raid_bdev1", 00:19:35.202 "uuid": "4dcf07f3-2187-4edd-a6e8-20c09ec4e535", 00:19:35.202 "strip_size_kb": 64, 00:19:35.202 "state": "online", 00:19:35.202 "raid_level": "raid0", 00:19:35.202 "superblock": true, 00:19:35.202 "num_base_bdevs": 4, 00:19:35.202 "num_base_bdevs_discovered": 4, 00:19:35.202 "num_base_bdevs_operational": 4, 00:19:35.202 "base_bdevs_list": [ 00:19:35.202 { 00:19:35.202 "name": "pt1", 00:19:35.202 "uuid": "26be52de-6215-5218-b12c-020103a8282c", 00:19:35.202 "is_configured": true, 00:19:35.202 "data_offset": 2048, 00:19:35.202 "data_size": 63488 00:19:35.202 }, 00:19:35.202 { 00:19:35.202 "name": "pt2", 00:19:35.202 "uuid": "e3302e43-3c0b-5855-8bd8-0977ddc98bc0", 00:19:35.202 "is_configured": true, 00:19:35.202 "data_offset": 2048, 00:19:35.202 "data_size": 63488 00:19:35.202 }, 00:19:35.202 { 00:19:35.203 "name": "pt3", 00:19:35.203 "uuid": "504532d7-8916-5b59-935b-8ae88ee47647", 00:19:35.203 "is_configured": true, 00:19:35.203 "data_offset": 2048, 00:19:35.203 "data_size": 63488 00:19:35.203 }, 00:19:35.203 { 00:19:35.203 "name": "pt4", 00:19:35.203 "uuid": "493990ee-2c7e-541e-acce-458474080356", 00:19:35.203 "is_configured": true, 00:19:35.203 "data_offset": 2048, 00:19:35.203 "data_size": 63488 00:19:35.203 } 00:19:35.203 ] 00:19:35.203 }' 00:19:35.203 00:39:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:35.203 00:39:08 -- common/autotest_common.sh@10 -- # set +x 00:19:35.823 00:39:09 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:35.823 00:39:09 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:36.119 [2024-04-27 00:39:09.461937] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.119 00:39:09 -- bdev/bdev_raid.sh@430 -- # '[' 4dcf07f3-2187-4edd-a6e8-20c09ec4e535 '!=' 4dcf07f3-2187-4edd-a6e8-20c09ec4e535 ']' 00:19:36.119 00:39:09 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:19:36.119 00:39:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:36.119 00:39:09 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:36.119 00:39:09 -- bdev/bdev_raid.sh@511 -- # killprocess 126910 00:19:36.119 00:39:09 -- common/autotest_common.sh@936 -- # '[' -z 126910 ']' 00:19:36.119 00:39:09 -- common/autotest_common.sh@940 -- # kill -0 126910 00:19:36.119 00:39:09 -- common/autotest_common.sh@941 -- # uname 00:19:36.119 00:39:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:36.119 00:39:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 126910 00:19:36.119 killing process with pid 126910 00:19:36.119 00:39:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:36.119 00:39:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:36.119 00:39:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 126910' 00:19:36.119 00:39:09 -- common/autotest_common.sh@955 -- # kill 126910 00:19:36.119 00:39:09 -- common/autotest_common.sh@960 -- # wait 126910 00:19:36.119 [2024-04-27 00:39:09.496783] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:36.119 [2024-04-27 00:39:09.496849] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.119 [2024-04-27 00:39:09.496962] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.119 [2024-04-27 00:39:09.496973] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:19:36.378 [2024-04-27 00:39:09.776810] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.312 ************************************ 00:19:37.312 END TEST raid_superblock_test 00:19:37.312 ************************************ 00:19:37.312 00:39:10 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:37.312 00:19:37.312 real 0m12.207s 00:19:37.312 user 0m21.236s 00:19:37.312 sys 0m1.501s 00:19:37.312 00:39:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:37.312 00:39:10 -- common/autotest_common.sh@10 -- # set +x 00:19:37.312 00:39:10 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:37.312 00:39:10 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:37.312 00:39:10 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:37.312 00:39:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:37.312 00:39:10 -- common/autotest_common.sh@10 -- # set +x 00:19:37.312 ************************************ 00:19:37.312 START TEST raid_state_function_test 00:19:37.312 ************************************ 00:19:37.312 00:39:10 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 4 false 00:19:37.312 00:39:10 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:37.312 00:39:10 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:37.312 00:39:10 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 
00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@226 -- # raid_pid=127247 00:19:37.313 Process raid pid: 127247 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127247' 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127247 /var/tmp/spdk-raid.sock 00:19:37.313 00:39:10 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:37.313 00:39:10 -- common/autotest_common.sh@817 -- # '[' -z 127247 ']' 00:19:37.313 00:39:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:37.313 00:39:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:37.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:37.313 00:39:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:37.313 00:39:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:37.313 00:39:10 -- common/autotest_common.sh@10 -- # set +x 00:19:37.572 [2024-04-27 00:39:10.936042] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:19:37.572 [2024-04-27 00:39:10.936868] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:37.572 [2024-04-27 00:39:11.106055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.830 [2024-04-27 00:39:11.293656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.089 [2024-04-27 00:39:11.466567] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.348 00:39:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:38.348 00:39:11 -- common/autotest_common.sh@850 -- # return 0 00:19:38.348 00:39:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:38.607 [2024-04-27 00:39:12.094904] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.607 [2024-04-27 00:39:12.095066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.607 [2024-04-27 00:39:12.095081] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:38.607 [2024-04-27 00:39:12.095105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:38.607 [2024-04-27 00:39:12.095114] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:38.607 [2024-04-27 00:39:12.095155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:38.607 [2024-04-27 00:39:12.095164] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:38.607 [2024-04-27 00:39:12.095188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.607 00:39:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.865 00:39:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.865 "name": "Existed_Raid", 00:19:38.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.865 "strip_size_kb": 64, 00:19:38.865 "state": "configuring", 00:19:38.865 "raid_level": "concat", 00:19:38.865 "superblock": false, 00:19:38.865 "num_base_bdevs": 4, 00:19:38.865 "num_base_bdevs_discovered": 0, 00:19:38.865 "num_base_bdevs_operational": 4, 00:19:38.865 "base_bdevs_list": [ 00:19:38.865 { 00:19:38.865 
"name": "BaseBdev1", 00:19:38.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.865 "is_configured": false, 00:19:38.865 "data_offset": 0, 00:19:38.865 "data_size": 0 00:19:38.865 }, 00:19:38.865 { 00:19:38.865 "name": "BaseBdev2", 00:19:38.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.865 "is_configured": false, 00:19:38.865 "data_offset": 0, 00:19:38.865 "data_size": 0 00:19:38.865 }, 00:19:38.865 { 00:19:38.865 "name": "BaseBdev3", 00:19:38.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.865 "is_configured": false, 00:19:38.865 "data_offset": 0, 00:19:38.865 "data_size": 0 00:19:38.865 }, 00:19:38.865 { 00:19:38.865 "name": "BaseBdev4", 00:19:38.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.865 "is_configured": false, 00:19:38.865 "data_offset": 0, 00:19:38.865 "data_size": 0 00:19:38.865 } 00:19:38.865 ] 00:19:38.865 }' 00:19:38.865 00:39:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.865 00:39:12 -- common/autotest_common.sh@10 -- # set +x 00:19:39.433 00:39:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:39.691 [2024-04-27 00:39:13.203129] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:39.691 [2024-04-27 00:39:13.203185] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:19:39.691 00:39:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:39.948 [2024-04-27 00:39:13.471221] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:39.949 [2024-04-27 00:39:13.471301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:39.949 [2024-04-27 00:39:13.471329] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:39.949 [2024-04-27 00:39:13.471353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:39.949 [2024-04-27 00:39:13.471361] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:39.949 [2024-04-27 00:39:13.471411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:39.949 [2024-04-27 00:39:13.471419] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:39.949 [2024-04-27 00:39:13.471440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:39.949 00:39:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:40.207 [2024-04-27 00:39:13.705792] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.207 BaseBdev1 00:19:40.207 00:39:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:40.207 00:39:13 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:40.207 00:39:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:40.207 00:39:13 -- common/autotest_common.sh@887 -- # local i 00:19:40.207 00:39:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:40.207 00:39:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:40.207 00:39:13 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.464 00:39:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:40.723 [ 00:19:40.723 { 00:19:40.723 "name": "BaseBdev1", 00:19:40.723 "aliases": [ 00:19:40.723 "c8657687-6f94-496a-819f-04149b5e6144" 00:19:40.723 ], 00:19:40.723 "product_name": "Malloc disk", 00:19:40.723 "block_size": 512, 00:19:40.723 "num_blocks": 65536, 00:19:40.723 "uuid": "c8657687-6f94-496a-819f-04149b5e6144", 00:19:40.723 "assigned_rate_limits": { 00:19:40.723 "rw_ios_per_sec": 0, 00:19:40.723 "rw_mbytes_per_sec": 0, 00:19:40.723 "r_mbytes_per_sec": 0, 00:19:40.723 "w_mbytes_per_sec": 0 00:19:40.723 }, 00:19:40.723 "claimed": true, 00:19:40.723 "claim_type": "exclusive_write", 00:19:40.723 "zoned": false, 00:19:40.723 "supported_io_types": { 00:19:40.723 "read": true, 00:19:40.723 "write": true, 00:19:40.723 "unmap": true, 00:19:40.723 "write_zeroes": true, 00:19:40.723 "flush": true, 00:19:40.723 "reset": true, 00:19:40.723 "compare": false, 00:19:40.723 "compare_and_write": false, 00:19:40.723 "abort": true, 00:19:40.723 "nvme_admin": false, 00:19:40.723 "nvme_io": false 00:19:40.723 }, 00:19:40.723 "memory_domains": [ 00:19:40.723 { 00:19:40.723 "dma_device_id": "system", 00:19:40.723 "dma_device_type": 1 00:19:40.723 }, 00:19:40.723 { 00:19:40.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.723 "dma_device_type": 2 00:19:40.723 } 00:19:40.723 ], 00:19:40.723 "driver_specific": {} 00:19:40.723 } 00:19:40.723 ] 00:19:40.723 00:39:14 -- common/autotest_common.sh@893 -- # return 0 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.723 00:39:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:40.981 00:39:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.981 "name": "Existed_Raid", 00:19:40.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.981 "strip_size_kb": 64, 00:19:40.981 "state": "configuring", 00:19:40.981 "raid_level": "concat", 00:19:40.981 "superblock": false, 00:19:40.981 "num_base_bdevs": 4, 00:19:40.981 "num_base_bdevs_discovered": 1, 00:19:40.981 "num_base_bdevs_operational": 4, 00:19:40.982 "base_bdevs_list": [ 00:19:40.982 { 00:19:40.982 "name": "BaseBdev1", 00:19:40.982 "uuid": "c8657687-6f94-496a-819f-04149b5e6144", 00:19:40.982 "is_configured": true, 00:19:40.982 "data_offset": 0, 00:19:40.982 "data_size": 65536 00:19:40.982 }, 00:19:40.982 { 00:19:40.982 "name": "BaseBdev2", 00:19:40.982 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:40.982 "is_configured": false, 00:19:40.982 "data_offset": 0, 00:19:40.982 "data_size": 0 00:19:40.982 }, 00:19:40.982 { 00:19:40.982 "name": "BaseBdev3", 00:19:40.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.982 "is_configured": false, 00:19:40.982 "data_offset": 0, 00:19:40.982 "data_size": 0 00:19:40.982 }, 00:19:40.982 { 00:19:40.982 "name": "BaseBdev4", 00:19:40.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:40.982 "is_configured": false, 00:19:40.982 "data_offset": 0, 00:19:40.982 "data_size": 0 00:19:40.982 } 00:19:40.982 ] 00:19:40.982 }' 00:19:40.982 00:39:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.982 00:39:14 -- common/autotest_common.sh@10 -- # set +x 00:19:41.548 00:39:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:41.807 [2024-04-27 00:39:15.214185] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:41.807 [2024-04-27 00:39:15.214263] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:19:41.807 00:39:15 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:41.807 00:39:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:42.117 [2024-04-27 00:39:15.510292] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.117 [2024-04-27 00:39:15.512503] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:42.117 [2024-04-27 00:39:15.512592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:42.117 [2024-04-27 00:39:15.512621] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:42.117 [2024-04-27 00:39:15.512646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:42.117 [2024-04-27 00:39:15.512655] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:42.117 [2024-04-27 00:39:15.512671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.117 00:39:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:42.376 00:39:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.376 "name": "Existed_Raid", 00:19:42.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.376 "strip_size_kb": 64, 00:19:42.376 "state": "configuring", 00:19:42.376 "raid_level": "concat", 00:19:42.376 "superblock": false, 00:19:42.376 "num_base_bdevs": 4, 00:19:42.376 "num_base_bdevs_discovered": 1, 00:19:42.376 "num_base_bdevs_operational": 4, 00:19:42.376 "base_bdevs_list": [ 00:19:42.376 { 00:19:42.376 "name": "BaseBdev1", 00:19:42.376 "uuid": "c8657687-6f94-496a-819f-04149b5e6144", 00:19:42.376 "is_configured": true, 00:19:42.376 "data_offset": 0, 00:19:42.376 "data_size": 65536 00:19:42.376 }, 00:19:42.376 { 00:19:42.376 "name": "BaseBdev2", 00:19:42.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.376 "is_configured": false, 00:19:42.376 "data_offset": 0, 00:19:42.376 "data_size": 0 00:19:42.376 }, 00:19:42.376 { 00:19:42.376 "name": "BaseBdev3", 00:19:42.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.376 "is_configured": false, 00:19:42.376 "data_offset": 0, 00:19:42.376 "data_size": 0 00:19:42.376 }, 00:19:42.376 { 00:19:42.376 "name": "BaseBdev4", 00:19:42.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.376 "is_configured": false, 00:19:42.376 "data_offset": 0, 00:19:42.376 "data_size": 0 00:19:42.376 } 00:19:42.376 ] 00:19:42.376 }' 00:19:42.376 00:39:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.376 00:39:15 -- common/autotest_common.sh@10 -- # set +x 00:19:42.943 00:39:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:43.201 [2024-04-27 00:39:16.759759] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.202 BaseBdev2 00:19:43.202 00:39:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:43.202 00:39:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:43.202 00:39:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:43.202 00:39:16 -- common/autotest_common.sh@887 -- # local i 00:19:43.202 00:39:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:43.202 00:39:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:43.202 00:39:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:43.460 00:39:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:43.719 [ 00:19:43.719 { 00:19:43.719 "name": "BaseBdev2", 00:19:43.719 "aliases": [ 00:19:43.719 "7e82ec12-1853-46f4-ab4c-f12ac8764e3c" 00:19:43.719 ], 00:19:43.719 "product_name": "Malloc disk", 00:19:43.719 "block_size": 512, 00:19:43.719 "num_blocks": 65536, 00:19:43.719 "uuid": "7e82ec12-1853-46f4-ab4c-f12ac8764e3c", 00:19:43.719 "assigned_rate_limits": { 00:19:43.719 "rw_ios_per_sec": 0, 00:19:43.719 "rw_mbytes_per_sec": 0, 00:19:43.719 "r_mbytes_per_sec": 0, 00:19:43.719 "w_mbytes_per_sec": 0 00:19:43.719 }, 00:19:43.719 "claimed": true, 00:19:43.719 "claim_type": "exclusive_write", 00:19:43.719 "zoned": false, 00:19:43.719 "supported_io_types": { 00:19:43.719 "read": true, 00:19:43.719 "write": true, 00:19:43.719 "unmap": true, 00:19:43.719 "write_zeroes": true, 00:19:43.719 "flush": true, 00:19:43.719 "reset": true, 00:19:43.719 "compare": false, 00:19:43.719 "compare_and_write": false, 00:19:43.719 "abort": true, 
00:19:43.719 "nvme_admin": false, 00:19:43.719 "nvme_io": false 00:19:43.719 }, 00:19:43.719 "memory_domains": [ 00:19:43.719 { 00:19:43.719 "dma_device_id": "system", 00:19:43.719 "dma_device_type": 1 00:19:43.719 }, 00:19:43.719 { 00:19:43.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:43.719 "dma_device_type": 2 00:19:43.719 } 00:19:43.719 ], 00:19:43.719 "driver_specific": {} 00:19:43.719 } 00:19:43.719 ] 00:19:43.719 00:39:17 -- common/autotest_common.sh@893 -- # return 0 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.719 00:39:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:43.977 00:39:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:43.978 "name": "Existed_Raid", 00:19:43.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.978 "strip_size_kb": 64, 00:19:43.978 "state": "configuring", 00:19:43.978 "raid_level": "concat", 00:19:43.978 "superblock": false, 00:19:43.978 "num_base_bdevs": 4, 00:19:43.978 "num_base_bdevs_discovered": 2, 00:19:43.978 "num_base_bdevs_operational": 4, 00:19:43.978 "base_bdevs_list": [ 00:19:43.978 { 00:19:43.978 "name": "BaseBdev1", 00:19:43.978 "uuid": "c8657687-6f94-496a-819f-04149b5e6144", 00:19:43.978 "is_configured": true, 00:19:43.978 "data_offset": 0, 00:19:43.978 "data_size": 65536 00:19:43.978 }, 00:19:43.978 { 00:19:43.978 "name": "BaseBdev2", 00:19:43.978 "uuid": "7e82ec12-1853-46f4-ab4c-f12ac8764e3c", 00:19:43.978 "is_configured": true, 00:19:43.978 "data_offset": 0, 00:19:43.978 "data_size": 65536 00:19:43.978 }, 00:19:43.978 { 00:19:43.978 "name": "BaseBdev3", 00:19:43.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.978 "is_configured": false, 00:19:43.978 "data_offset": 0, 00:19:43.978 "data_size": 0 00:19:43.978 }, 00:19:43.978 { 00:19:43.978 "name": "BaseBdev4", 00:19:43.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.978 "is_configured": false, 00:19:43.978 "data_offset": 0, 00:19:43.978 "data_size": 0 00:19:43.978 } 00:19:43.978 ] 00:19:43.978 }' 00:19:43.978 00:39:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:43.978 00:39:17 -- common/autotest_common.sh@10 -- # set +x 00:19:44.913 00:39:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:44.913 [2024-04-27 00:39:18.441955] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:44.913 BaseBdev3 00:19:44.913 00:39:18 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:44.913 00:39:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:19:44.913 00:39:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:44.913 00:39:18 -- common/autotest_common.sh@887 -- # local i 00:19:44.913 00:39:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:44.913 00:39:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:44.913 00:39:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:45.172 00:39:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:45.430 [ 00:19:45.430 { 00:19:45.430 "name": "BaseBdev3", 00:19:45.430 "aliases": [ 00:19:45.430 "853d5440-82aa-466a-84ec-1e9f0ca50e16" 00:19:45.430 ], 00:19:45.430 "product_name": "Malloc disk", 00:19:45.430 "block_size": 512, 00:19:45.430 "num_blocks": 65536, 00:19:45.430 "uuid": "853d5440-82aa-466a-84ec-1e9f0ca50e16", 00:19:45.430 "assigned_rate_limits": { 00:19:45.430 "rw_ios_per_sec": 0, 00:19:45.430 "rw_mbytes_per_sec": 0, 00:19:45.430 "r_mbytes_per_sec": 0, 00:19:45.430 "w_mbytes_per_sec": 0 00:19:45.430 }, 00:19:45.430 "claimed": true, 00:19:45.430 "claim_type": "exclusive_write", 00:19:45.430 "zoned": false, 00:19:45.430 "supported_io_types": { 00:19:45.430 "read": true, 00:19:45.430 "write": true, 00:19:45.430 "unmap": true, 00:19:45.430 "write_zeroes": true, 00:19:45.430 "flush": true, 00:19:45.430 "reset": true, 00:19:45.430 "compare": false, 00:19:45.430 "compare_and_write": false, 00:19:45.430 "abort": true, 00:19:45.430 "nvme_admin": false, 00:19:45.430 "nvme_io": false 00:19:45.430 }, 00:19:45.430 "memory_domains": [ 00:19:45.430 { 00:19:45.430 "dma_device_id": "system", 00:19:45.430 "dma_device_type": 1 00:19:45.430 }, 00:19:45.430 { 00:19:45.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.430 "dma_device_type": 2 00:19:45.430 } 00:19:45.430 ], 00:19:45.430 "driver_specific": {} 00:19:45.430 } 00:19:45.430 ] 00:19:45.430 00:39:18 -- common/autotest_common.sh@893 -- # return 0 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.430 00:39:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.688 00:39:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:45.688 "name": "Existed_Raid", 00:19:45.688 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:45.688 "strip_size_kb": 64, 00:19:45.688 "state": "configuring", 00:19:45.688 "raid_level": "concat", 00:19:45.688 "superblock": false, 00:19:45.688 "num_base_bdevs": 4, 00:19:45.688 "num_base_bdevs_discovered": 3, 00:19:45.688 "num_base_bdevs_operational": 4, 00:19:45.688 "base_bdevs_list": [ 00:19:45.688 { 00:19:45.688 "name": "BaseBdev1", 00:19:45.688 "uuid": "c8657687-6f94-496a-819f-04149b5e6144", 00:19:45.688 "is_configured": true, 00:19:45.688 "data_offset": 0, 00:19:45.688 "data_size": 65536 00:19:45.688 }, 00:19:45.688 { 00:19:45.688 "name": "BaseBdev2", 00:19:45.688 "uuid": "7e82ec12-1853-46f4-ab4c-f12ac8764e3c", 00:19:45.688 "is_configured": true, 00:19:45.688 "data_offset": 0, 00:19:45.688 "data_size": 65536 00:19:45.688 }, 00:19:45.688 { 00:19:45.688 "name": "BaseBdev3", 00:19:45.688 "uuid": "853d5440-82aa-466a-84ec-1e9f0ca50e16", 00:19:45.688 "is_configured": true, 00:19:45.688 "data_offset": 0, 00:19:45.688 "data_size": 65536 00:19:45.688 }, 00:19:45.688 { 00:19:45.688 "name": "BaseBdev4", 00:19:45.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.688 "is_configured": false, 00:19:45.688 "data_offset": 0, 00:19:45.688 "data_size": 0 00:19:45.688 } 00:19:45.688 ] 00:19:45.688 }' 00:19:45.688 00:39:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:45.688 00:39:19 -- common/autotest_common.sh@10 -- # set +x 00:19:46.254 00:39:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:46.513 [2024-04-27 00:39:20.040750] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:46.513 [2024-04-27 00:39:20.040821] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:19:46.513 [2024-04-27 00:39:20.040830] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:46.513 [2024-04-27 00:39:20.040953] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:46.513 [2024-04-27 00:39:20.041374] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:19:46.513 [2024-04-27 00:39:20.041400] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:19:46.513 [2024-04-27 00:39:20.041667] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.513 BaseBdev4 00:19:46.513 00:39:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:46.513 00:39:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:19:46.513 00:39:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:46.513 00:39:20 -- common/autotest_common.sh@887 -- # local i 00:19:46.513 00:39:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:46.513 00:39:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:46.513 00:39:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.771 00:39:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:47.029 [ 00:19:47.029 { 00:19:47.029 "name": "BaseBdev4", 00:19:47.029 "aliases": [ 00:19:47.029 "fe6303a2-a570-4d1c-a90f-56e6fcfbc50c" 00:19:47.029 ], 00:19:47.029 "product_name": "Malloc disk", 00:19:47.029 "block_size": 512, 00:19:47.029 "num_blocks": 65536, 00:19:47.029 "uuid": 
"fe6303a2-a570-4d1c-a90f-56e6fcfbc50c", 00:19:47.029 "assigned_rate_limits": { 00:19:47.029 "rw_ios_per_sec": 0, 00:19:47.029 "rw_mbytes_per_sec": 0, 00:19:47.029 "r_mbytes_per_sec": 0, 00:19:47.030 "w_mbytes_per_sec": 0 00:19:47.030 }, 00:19:47.030 "claimed": true, 00:19:47.030 "claim_type": "exclusive_write", 00:19:47.030 "zoned": false, 00:19:47.030 "supported_io_types": { 00:19:47.030 "read": true, 00:19:47.030 "write": true, 00:19:47.030 "unmap": true, 00:19:47.030 "write_zeroes": true, 00:19:47.030 "flush": true, 00:19:47.030 "reset": true, 00:19:47.030 "compare": false, 00:19:47.030 "compare_and_write": false, 00:19:47.030 "abort": true, 00:19:47.030 "nvme_admin": false, 00:19:47.030 "nvme_io": false 00:19:47.030 }, 00:19:47.030 "memory_domains": [ 00:19:47.030 { 00:19:47.030 "dma_device_id": "system", 00:19:47.030 "dma_device_type": 1 00:19:47.030 }, 00:19:47.030 { 00:19:47.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.030 "dma_device_type": 2 00:19:47.030 } 00:19:47.030 ], 00:19:47.030 "driver_specific": {} 00:19:47.030 } 00:19:47.030 ] 00:19:47.030 00:39:20 -- common/autotest_common.sh@893 -- # return 0 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.030 00:39:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.288 00:39:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.288 "name": "Existed_Raid", 00:19:47.288 "uuid": "8e0f9422-33f9-4cd1-af92-a821ed7345d2", 00:19:47.288 "strip_size_kb": 64, 00:19:47.288 "state": "online", 00:19:47.288 "raid_level": "concat", 00:19:47.288 "superblock": false, 00:19:47.288 "num_base_bdevs": 4, 00:19:47.288 "num_base_bdevs_discovered": 4, 00:19:47.288 "num_base_bdevs_operational": 4, 00:19:47.288 "base_bdevs_list": [ 00:19:47.288 { 00:19:47.288 "name": "BaseBdev1", 00:19:47.288 "uuid": "c8657687-6f94-496a-819f-04149b5e6144", 00:19:47.288 "is_configured": true, 00:19:47.288 "data_offset": 0, 00:19:47.288 "data_size": 65536 00:19:47.288 }, 00:19:47.288 { 00:19:47.288 "name": "BaseBdev2", 00:19:47.288 "uuid": "7e82ec12-1853-46f4-ab4c-f12ac8764e3c", 00:19:47.288 "is_configured": true, 00:19:47.288 "data_offset": 0, 00:19:47.288 "data_size": 65536 00:19:47.288 }, 00:19:47.288 { 00:19:47.288 "name": "BaseBdev3", 00:19:47.288 "uuid": "853d5440-82aa-466a-84ec-1e9f0ca50e16", 00:19:47.288 "is_configured": true, 00:19:47.288 "data_offset": 0, 00:19:47.288 "data_size": 65536 00:19:47.288 }, 00:19:47.288 { 00:19:47.288 "name": "BaseBdev4", 00:19:47.288 "uuid": 
"fe6303a2-a570-4d1c-a90f-56e6fcfbc50c", 00:19:47.288 "is_configured": true, 00:19:47.288 "data_offset": 0, 00:19:47.288 "data_size": 65536 00:19:47.288 } 00:19:47.288 ] 00:19:47.288 }' 00:19:47.288 00:39:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.288 00:39:20 -- common/autotest_common.sh@10 -- # set +x 00:19:47.855 00:39:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:48.114 [2024-04-27 00:39:21.497259] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:48.114 [2024-04-27 00:39:21.497299] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.114 [2024-04-27 00:39:21.497384] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.114 00:39:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.373 00:39:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.373 "name": "Existed_Raid", 00:19:48.373 "uuid": "8e0f9422-33f9-4cd1-af92-a821ed7345d2", 00:19:48.373 "strip_size_kb": 64, 00:19:48.373 "state": "offline", 00:19:48.373 "raid_level": "concat", 00:19:48.373 "superblock": false, 00:19:48.373 "num_base_bdevs": 4, 00:19:48.373 "num_base_bdevs_discovered": 3, 00:19:48.373 "num_base_bdevs_operational": 3, 00:19:48.373 "base_bdevs_list": [ 00:19:48.373 { 00:19:48.373 "name": null, 00:19:48.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.373 "is_configured": false, 00:19:48.373 "data_offset": 0, 00:19:48.373 "data_size": 65536 00:19:48.373 }, 00:19:48.373 { 00:19:48.373 "name": "BaseBdev2", 00:19:48.373 "uuid": "7e82ec12-1853-46f4-ab4c-f12ac8764e3c", 00:19:48.373 "is_configured": true, 00:19:48.373 "data_offset": 0, 00:19:48.373 "data_size": 65536 00:19:48.373 }, 00:19:48.373 { 00:19:48.373 "name": "BaseBdev3", 00:19:48.373 "uuid": "853d5440-82aa-466a-84ec-1e9f0ca50e16", 00:19:48.373 "is_configured": true, 00:19:48.373 "data_offset": 0, 00:19:48.373 "data_size": 65536 00:19:48.373 }, 00:19:48.373 { 00:19:48.373 "name": "BaseBdev4", 00:19:48.373 "uuid": "fe6303a2-a570-4d1c-a90f-56e6fcfbc50c", 00:19:48.373 "is_configured": true, 00:19:48.373 "data_offset": 0, 00:19:48.373 "data_size": 
65536 00:19:48.373 } 00:19:48.373 ] 00:19:48.373 }' 00:19:48.373 00:39:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.373 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:19:48.939 00:39:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:48.939 00:39:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:48.939 00:39:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:48.939 00:39:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.197 00:39:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:49.197 00:39:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:49.197 00:39:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:49.482 [2024-04-27 00:39:22.903736] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:49.482 00:39:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:49.482 00:39:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:49.482 00:39:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.482 00:39:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:49.762 00:39:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:49.762 00:39:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:49.762 00:39:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:50.020 [2024-04-27 00:39:23.392457] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:50.020 00:39:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:50.020 00:39:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:50.020 00:39:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.020 00:39:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:50.277 00:39:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:50.277 00:39:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:50.277 00:39:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:50.535 [2024-04-27 00:39:23.925297] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:50.535 [2024-04-27 00:39:23.925371] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:19:50.535 00:39:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:50.535 00:39:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:50.535 00:39:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.535 00:39:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:50.793 00:39:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:50.793 00:39:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:50.793 00:39:24 -- bdev/bdev_raid.sh@287 -- # killprocess 127247 00:19:50.793 00:39:24 -- common/autotest_common.sh@936 -- # '[' -z 127247 ']' 00:19:50.793 00:39:24 -- common/autotest_common.sh@940 -- # kill -0 127247 00:19:50.793 00:39:24 -- common/autotest_common.sh@941 -- # uname 00:19:50.793 00:39:24 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:19:50.793 00:39:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127247 00:19:50.793 00:39:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:50.793 00:39:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:50.793 00:39:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127247' 00:19:50.793 killing process with pid 127247 00:19:50.793 00:39:24 -- common/autotest_common.sh@955 -- # kill 127247 00:19:50.793 [2024-04-27 00:39:24.298488] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:50.793 00:39:24 -- common/autotest_common.sh@960 -- # wait 127247 00:19:50.793 [2024-04-27 00:39:24.298624] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.728 00:39:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:51.728 00:19:51.728 real 0m14.412s 00:19:51.728 user 0m25.891s 00:19:51.728 sys 0m1.582s 00:19:51.728 00:39:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:51.728 ************************************ 00:19:51.728 END TEST raid_state_function_test 00:19:51.728 ************************************ 00:19:51.728 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:51.987 00:39:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:51.987 00:39:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.987 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:51.987 ************************************ 00:19:51.987 START TEST raid_state_function_test_sb 00:19:51.987 ************************************ 00:19:51.987 00:39:25 -- common/autotest_common.sh@1111 -- # raid_state_function_test concat 4 true 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@210 -- # local 
superblock_create_arg 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=127771 00:19:51.987 Process raid pid: 127771 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127771' 00:19:51.987 00:39:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127771 /var/tmp/spdk-raid.sock 00:19:51.987 00:39:25 -- common/autotest_common.sh@817 -- # '[' -z 127771 ']' 00:19:51.987 00:39:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:51.987 00:39:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:51.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:51.987 00:39:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:51.987 00:39:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:51.987 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:19:51.987 [2024-04-27 00:39:25.443119] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:19:51.987 [2024-04-27 00:39:25.443371] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.247 [2024-04-27 00:39:25.617443] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.247 [2024-04-27 00:39:25.799066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.505 [2024-04-27 00:39:25.974571] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.071 00:39:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.071 00:39:26 -- common/autotest_common.sh@850 -- # return 0 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:53.071 [2024-04-27 00:39:26.580445] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:53.071 [2024-04-27 00:39:26.580515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:53.071 [2024-04-27 00:39:26.580545] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.071 [2024-04-27 00:39:26.580566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.071 [2024-04-27 00:39:26.580573] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:53.071 [2024-04-27 00:39:26.580610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:53.071 [2024-04-27 00:39:26.580618] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:53.071 [2024-04-27 00:39:26.580639] bdev_raid_rpc.c: 
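The @212-220 lines above set up the create arguments: any level other than raid1 gets a strip size, and superblock mode adds -s. A short sketch of that logic, assuming only the two branches visible in the trace:

    raid_level=concat superblock=true
    if [ "$raid_level" != raid1 ]; then
        strip_size=64                          # raid1 mirrors, so it takes no strip size
        strip_size_create_arg="-z $strip_size"
    fi
    if [ "$superblock" = true ]; then
        superblock_create_arg=-s               # have bdev_raid_create write an on-disk superblock
    fi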
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.071 00:39:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.329 00:39:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.329 "name": "Existed_Raid", 00:19:53.329 "uuid": "fd4a33ca-2842-4fa8-8d6b-5562ba9eae81", 00:19:53.329 "strip_size_kb": 64, 00:19:53.329 "state": "configuring", 00:19:53.329 "raid_level": "concat", 00:19:53.329 "superblock": true, 00:19:53.329 "num_base_bdevs": 4, 00:19:53.329 "num_base_bdevs_discovered": 0, 00:19:53.329 "num_base_bdevs_operational": 4, 00:19:53.329 "base_bdevs_list": [ 00:19:53.329 { 00:19:53.329 "name": "BaseBdev1", 00:19:53.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.329 "is_configured": false, 00:19:53.329 "data_offset": 0, 00:19:53.329 "data_size": 0 00:19:53.329 }, 00:19:53.329 { 00:19:53.329 "name": "BaseBdev2", 00:19:53.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.329 "is_configured": false, 00:19:53.329 "data_offset": 0, 00:19:53.329 "data_size": 0 00:19:53.329 }, 00:19:53.329 { 00:19:53.329 "name": "BaseBdev3", 00:19:53.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.329 "is_configured": false, 00:19:53.329 "data_offset": 0, 00:19:53.329 "data_size": 0 00:19:53.329 }, 00:19:53.329 { 00:19:53.329 "name": "BaseBdev4", 00:19:53.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.329 "is_configured": false, 00:19:53.329 "data_offset": 0, 00:19:53.329 "data_size": 0 00:19:53.329 } 00:19:53.329 ] 00:19:53.329 }' 00:19:53.329 00:39:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.329 00:39:26 -- common/autotest_common.sh@10 -- # set +x 00:19:53.896 00:39:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:54.154 [2024-04-27 00:39:27.676527] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:54.154 [2024-04-27 00:39:27.676588] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:19:54.154 00:39:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:54.412 [2024-04-27 00:39:27.884623] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.412 [2024-04-27 00:39:27.884741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist 
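verify_raid_bdev_state is only traced down to its local declarations and the @127 jq query, so the comparison step below is a plausible sketch built from the fields in the JSON dump above, not the function's verbatim body (the rpc shorthand is again introduced for the sketch):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev_info=$($rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")')
    [ "$(jq -r '.state' <<< "$raid_bdev_info")" = configuring ]
    [ "$(jq -r '.raid_level' <<< "$raid_bdev_info")" = concat ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info")" -eq 4 ]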
now 00:19:54.412 [2024-04-27 00:39:27.884753] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:54.412 [2024-04-27 00:39:27.884778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:54.412 [2024-04-27 00:39:27.884786] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:54.412 [2024-04-27 00:39:27.884832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:54.412 [2024-04-27 00:39:27.884840] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:54.412 [2024-04-27 00:39:27.884862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:54.412 00:39:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:54.671 [2024-04-27 00:39:28.115902] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.671 BaseBdev1 00:19:54.671 00:39:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:54.671 00:39:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:54.671 00:39:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:54.671 00:39:28 -- common/autotest_common.sh@887 -- # local i 00:19:54.671 00:39:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:54.671 00:39:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:54.671 00:39:28 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.934 00:39:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:55.193 [ 00:19:55.193 { 00:19:55.193 "name": "BaseBdev1", 00:19:55.193 "aliases": [ 00:19:55.193 "d64ca42a-0da3-490e-8785-20196eb65e0a" 00:19:55.193 ], 00:19:55.193 "product_name": "Malloc disk", 00:19:55.193 "block_size": 512, 00:19:55.193 "num_blocks": 65536, 00:19:55.193 "uuid": "d64ca42a-0da3-490e-8785-20196eb65e0a", 00:19:55.193 "assigned_rate_limits": { 00:19:55.193 "rw_ios_per_sec": 0, 00:19:55.193 "rw_mbytes_per_sec": 0, 00:19:55.193 "r_mbytes_per_sec": 0, 00:19:55.193 "w_mbytes_per_sec": 0 00:19:55.193 }, 00:19:55.193 "claimed": true, 00:19:55.193 "claim_type": "exclusive_write", 00:19:55.193 "zoned": false, 00:19:55.193 "supported_io_types": { 00:19:55.193 "read": true, 00:19:55.193 "write": true, 00:19:55.193 "unmap": true, 00:19:55.193 "write_zeroes": true, 00:19:55.193 "flush": true, 00:19:55.193 "reset": true, 00:19:55.193 "compare": false, 00:19:55.193 "compare_and_write": false, 00:19:55.193 "abort": true, 00:19:55.193 "nvme_admin": false, 00:19:55.193 "nvme_io": false 00:19:55.193 }, 00:19:55.193 "memory_domains": [ 00:19:55.193 { 00:19:55.193 "dma_device_id": "system", 00:19:55.193 "dma_device_type": 1 00:19:55.193 }, 00:19:55.193 { 00:19:55.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.193 "dma_device_type": 2 00:19:55.193 } 00:19:55.193 ], 00:19:55.193 "driver_specific": {} 00:19:55.193 } 00:19:55.193 ] 00:19:55.193 00:39:28 -- common/autotest_common.sh@893 -- # return 0 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@118 -- 
# local expected_state=configuring 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.193 00:39:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.452 00:39:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:55.452 "name": "Existed_Raid", 00:19:55.452 "uuid": "5ad29e24-2191-46e7-b5ce-2889811d98cc", 00:19:55.452 "strip_size_kb": 64, 00:19:55.452 "state": "configuring", 00:19:55.452 "raid_level": "concat", 00:19:55.452 "superblock": true, 00:19:55.452 "num_base_bdevs": 4, 00:19:55.452 "num_base_bdevs_discovered": 1, 00:19:55.452 "num_base_bdevs_operational": 4, 00:19:55.452 "base_bdevs_list": [ 00:19:55.452 { 00:19:55.452 "name": "BaseBdev1", 00:19:55.452 "uuid": "d64ca42a-0da3-490e-8785-20196eb65e0a", 00:19:55.452 "is_configured": true, 00:19:55.452 "data_offset": 2048, 00:19:55.452 "data_size": 63488 00:19:55.452 }, 00:19:55.452 { 00:19:55.452 "name": "BaseBdev2", 00:19:55.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.452 "is_configured": false, 00:19:55.452 "data_offset": 0, 00:19:55.452 "data_size": 0 00:19:55.452 }, 00:19:55.452 { 00:19:55.452 "name": "BaseBdev3", 00:19:55.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.452 "is_configured": false, 00:19:55.452 "data_offset": 0, 00:19:55.452 "data_size": 0 00:19:55.452 }, 00:19:55.452 { 00:19:55.452 "name": "BaseBdev4", 00:19:55.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.452 "is_configured": false, 00:19:55.452 "data_offset": 0, 00:19:55.452 "data_size": 0 00:19:55.452 } 00:19:55.452 ] 00:19:55.452 }' 00:19:55.452 00:39:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:55.452 00:39:28 -- common/autotest_common.sh@10 -- # set +x 00:19:56.048 00:39:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:56.314 [2024-04-27 00:39:29.708253] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:56.314 [2024-04-27 00:39:29.708345] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:19:56.314 00:39:29 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:56.314 00:39:29 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:56.573 00:39:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:56.831 BaseBdev1 00:19:56.831 00:39:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:56.831 00:39:30 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:56.831 00:39:30 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:56.831 00:39:30 -- common/autotest_common.sh@887 -- # local i 00:19:56.831 00:39:30 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
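The waitforbdev calls traced at common/autotest_common.sh@885-893 boil down to waiting for examine to finish and then asking for the bdev with a timeout. A simplified sketch, assuming the 2000 ms default seen in the trace (the real helper does more argument handling):

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}   # milliseconds
        $rpc bdev_wait_for_examine
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }
    waitforbdev BaseBdev1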
00:19:56.831 00:39:30 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:56.831 00:39:30 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.090 00:39:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:57.348 [ 00:19:57.349 { 00:19:57.349 "name": "BaseBdev1", 00:19:57.349 "aliases": [ 00:19:57.349 "f2c0d983-ba49-41f2-9d2a-b23c9f00f5a7" 00:19:57.349 ], 00:19:57.349 "product_name": "Malloc disk", 00:19:57.349 "block_size": 512, 00:19:57.349 "num_blocks": 65536, 00:19:57.349 "uuid": "f2c0d983-ba49-41f2-9d2a-b23c9f00f5a7", 00:19:57.349 "assigned_rate_limits": { 00:19:57.349 "rw_ios_per_sec": 0, 00:19:57.349 "rw_mbytes_per_sec": 0, 00:19:57.349 "r_mbytes_per_sec": 0, 00:19:57.349 "w_mbytes_per_sec": 0 00:19:57.349 }, 00:19:57.349 "claimed": false, 00:19:57.349 "zoned": false, 00:19:57.349 "supported_io_types": { 00:19:57.349 "read": true, 00:19:57.349 "write": true, 00:19:57.349 "unmap": true, 00:19:57.349 "write_zeroes": true, 00:19:57.349 "flush": true, 00:19:57.349 "reset": true, 00:19:57.349 "compare": false, 00:19:57.349 "compare_and_write": false, 00:19:57.349 "abort": true, 00:19:57.349 "nvme_admin": false, 00:19:57.349 "nvme_io": false 00:19:57.349 }, 00:19:57.349 "memory_domains": [ 00:19:57.349 { 00:19:57.349 "dma_device_id": "system", 00:19:57.349 "dma_device_type": 1 00:19:57.349 }, 00:19:57.349 { 00:19:57.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.349 "dma_device_type": 2 00:19:57.349 } 00:19:57.349 ], 00:19:57.349 "driver_specific": {} 00:19:57.349 } 00:19:57.349 ] 00:19:57.349 00:39:30 -- common/autotest_common.sh@893 -- # return 0 00:19:57.349 00:39:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:57.607 [2024-04-27 00:39:30.954887] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.607 [2024-04-27 00:39:30.956918] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.607 [2024-04-27 00:39:30.957012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.607 [2024-04-27 00:39:30.957042] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:57.607 [2024-04-27 00:39:30.957067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:57.607 [2024-04-27 00:39:30.957076] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:57.607 [2024-04-27 00:39:30.957093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:57.607 00:39:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:57.607 00:39:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:57.607 00:39:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:57.607 00:39:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:57.607 00:39:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:57.607 00:39:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:57.607 00:39:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:57.608 00:39:30 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:19:57.608 00:39:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.608 00:39:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.608 00:39:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.608 00:39:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.608 00:39:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.608 00:39:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.608 00:39:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:57.608 "name": "Existed_Raid", 00:19:57.608 "uuid": "e05c7c50-30f8-406d-a951-b7b82a7da23b", 00:19:57.608 "strip_size_kb": 64, 00:19:57.608 "state": "configuring", 00:19:57.608 "raid_level": "concat", 00:19:57.608 "superblock": true, 00:19:57.608 "num_base_bdevs": 4, 00:19:57.608 "num_base_bdevs_discovered": 1, 00:19:57.608 "num_base_bdevs_operational": 4, 00:19:57.608 "base_bdevs_list": [ 00:19:57.608 { 00:19:57.608 "name": "BaseBdev1", 00:19:57.608 "uuid": "f2c0d983-ba49-41f2-9d2a-b23c9f00f5a7", 00:19:57.608 "is_configured": true, 00:19:57.608 "data_offset": 2048, 00:19:57.608 "data_size": 63488 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "BaseBdev2", 00:19:57.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.608 "is_configured": false, 00:19:57.608 "data_offset": 0, 00:19:57.608 "data_size": 0 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "BaseBdev3", 00:19:57.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.608 "is_configured": false, 00:19:57.608 "data_offset": 0, 00:19:57.608 "data_size": 0 00:19:57.608 }, 00:19:57.608 { 00:19:57.608 "name": "BaseBdev4", 00:19:57.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.608 "is_configured": false, 00:19:57.608 "data_offset": 0, 00:19:57.608 "data_size": 0 00:19:57.608 } 00:19:57.608 ] 00:19:57.608 }' 00:19:57.608 00:39:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:57.608 00:39:31 -- common/autotest_common.sh@10 -- # set +x 00:19:58.544 00:39:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:58.544 [2024-04-27 00:39:32.028239] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:58.544 BaseBdev2 00:19:58.544 00:39:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:58.544 00:39:32 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:19:58.544 00:39:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:58.544 00:39:32 -- common/autotest_common.sh@887 -- # local i 00:19:58.544 00:39:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:58.544 00:39:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:58.544 00:39:32 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.802 00:39:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:59.062 [ 00:19:59.062 { 00:19:59.062 "name": "BaseBdev2", 00:19:59.062 "aliases": [ 00:19:59.062 "297e9d30-4dc5-4f27-8efa-396a9e68ffaa" 00:19:59.062 ], 00:19:59.062 "product_name": "Malloc disk", 00:19:59.062 "block_size": 512, 00:19:59.062 "num_blocks": 65536, 00:19:59.062 "uuid": "297e9d30-4dc5-4f27-8efa-396a9e68ffaa", 00:19:59.062 "assigned_rate_limits": { 
00:19:59.062 "rw_ios_per_sec": 0, 00:19:59.062 "rw_mbytes_per_sec": 0, 00:19:59.062 "r_mbytes_per_sec": 0, 00:19:59.062 "w_mbytes_per_sec": 0 00:19:59.062 }, 00:19:59.062 "claimed": true, 00:19:59.062 "claim_type": "exclusive_write", 00:19:59.062 "zoned": false, 00:19:59.062 "supported_io_types": { 00:19:59.062 "read": true, 00:19:59.062 "write": true, 00:19:59.062 "unmap": true, 00:19:59.062 "write_zeroes": true, 00:19:59.062 "flush": true, 00:19:59.062 "reset": true, 00:19:59.062 "compare": false, 00:19:59.062 "compare_and_write": false, 00:19:59.062 "abort": true, 00:19:59.062 "nvme_admin": false, 00:19:59.062 "nvme_io": false 00:19:59.062 }, 00:19:59.062 "memory_domains": [ 00:19:59.062 { 00:19:59.062 "dma_device_id": "system", 00:19:59.062 "dma_device_type": 1 00:19:59.062 }, 00:19:59.062 { 00:19:59.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.062 "dma_device_type": 2 00:19:59.062 } 00:19:59.062 ], 00:19:59.062 "driver_specific": {} 00:19:59.062 } 00:19:59.062 ] 00:19:59.062 00:39:32 -- common/autotest_common.sh@893 -- # return 0 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.062 00:39:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.320 00:39:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.320 "name": "Existed_Raid", 00:19:59.320 "uuid": "e05c7c50-30f8-406d-a951-b7b82a7da23b", 00:19:59.320 "strip_size_kb": 64, 00:19:59.320 "state": "configuring", 00:19:59.320 "raid_level": "concat", 00:19:59.320 "superblock": true, 00:19:59.320 "num_base_bdevs": 4, 00:19:59.320 "num_base_bdevs_discovered": 2, 00:19:59.320 "num_base_bdevs_operational": 4, 00:19:59.320 "base_bdevs_list": [ 00:19:59.320 { 00:19:59.320 "name": "BaseBdev1", 00:19:59.320 "uuid": "f2c0d983-ba49-41f2-9d2a-b23c9f00f5a7", 00:19:59.320 "is_configured": true, 00:19:59.320 "data_offset": 2048, 00:19:59.320 "data_size": 63488 00:19:59.320 }, 00:19:59.320 { 00:19:59.320 "name": "BaseBdev2", 00:19:59.320 "uuid": "297e9d30-4dc5-4f27-8efa-396a9e68ffaa", 00:19:59.320 "is_configured": true, 00:19:59.320 "data_offset": 2048, 00:19:59.320 "data_size": 63488 00:19:59.321 }, 00:19:59.321 { 00:19:59.321 "name": "BaseBdev3", 00:19:59.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.321 "is_configured": false, 00:19:59.321 "data_offset": 0, 00:19:59.321 "data_size": 0 00:19:59.321 }, 00:19:59.321 { 00:19:59.321 "name": "BaseBdev4", 00:19:59.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.321 "is_configured": false, 
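The data_offset/data_size pair reported for configured members follows from the malloc geometry shown above: each base bdev has 65536 blocks of 512 bytes (32 MB), and with the superblock stored at the front of every member the raid exposes only the remainder, so data_size = 65536 - 2048 = 63488 blocks. Unconfigured members in the same dump still show data_offset 0 and data_size 0.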
00:19:59.321 "data_offset": 0, 00:19:59.321 "data_size": 0 00:19:59.321 } 00:19:59.321 ] 00:19:59.321 }' 00:19:59.321 00:39:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.321 00:39:32 -- common/autotest_common.sh@10 -- # set +x 00:19:59.887 00:39:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:00.145 [2024-04-27 00:39:33.576317] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.145 BaseBdev3 00:20:00.145 00:39:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:00.145 00:39:33 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:20:00.145 00:39:33 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:00.145 00:39:33 -- common/autotest_common.sh@887 -- # local i 00:20:00.145 00:39:33 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:00.145 00:39:33 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:00.145 00:39:33 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.404 00:39:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:00.662 [ 00:20:00.662 { 00:20:00.662 "name": "BaseBdev3", 00:20:00.662 "aliases": [ 00:20:00.662 "5502183c-4c87-4977-ad98-6ae4cff3857a" 00:20:00.662 ], 00:20:00.662 "product_name": "Malloc disk", 00:20:00.662 "block_size": 512, 00:20:00.662 "num_blocks": 65536, 00:20:00.662 "uuid": "5502183c-4c87-4977-ad98-6ae4cff3857a", 00:20:00.662 "assigned_rate_limits": { 00:20:00.662 "rw_ios_per_sec": 0, 00:20:00.662 "rw_mbytes_per_sec": 0, 00:20:00.662 "r_mbytes_per_sec": 0, 00:20:00.662 "w_mbytes_per_sec": 0 00:20:00.662 }, 00:20:00.662 "claimed": true, 00:20:00.662 "claim_type": "exclusive_write", 00:20:00.662 "zoned": false, 00:20:00.662 "supported_io_types": { 00:20:00.662 "read": true, 00:20:00.662 "write": true, 00:20:00.662 "unmap": true, 00:20:00.662 "write_zeroes": true, 00:20:00.662 "flush": true, 00:20:00.662 "reset": true, 00:20:00.662 "compare": false, 00:20:00.662 "compare_and_write": false, 00:20:00.662 "abort": true, 00:20:00.662 "nvme_admin": false, 00:20:00.662 "nvme_io": false 00:20:00.662 }, 00:20:00.662 "memory_domains": [ 00:20:00.662 { 00:20:00.662 "dma_device_id": "system", 00:20:00.662 "dma_device_type": 1 00:20:00.662 }, 00:20:00.662 { 00:20:00.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.662 "dma_device_type": 2 00:20:00.662 } 00:20:00.662 ], 00:20:00.662 "driver_specific": {} 00:20:00.662 } 00:20:00.662 ] 00:20:00.662 00:39:34 -- common/autotest_common.sh@893 -- # return 0 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.662 00:39:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.921 00:39:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:00.921 "name": "Existed_Raid", 00:20:00.921 "uuid": "e05c7c50-30f8-406d-a951-b7b82a7da23b", 00:20:00.921 "strip_size_kb": 64, 00:20:00.921 "state": "configuring", 00:20:00.921 "raid_level": "concat", 00:20:00.921 "superblock": true, 00:20:00.921 "num_base_bdevs": 4, 00:20:00.921 "num_base_bdevs_discovered": 3, 00:20:00.921 "num_base_bdevs_operational": 4, 00:20:00.921 "base_bdevs_list": [ 00:20:00.921 { 00:20:00.921 "name": "BaseBdev1", 00:20:00.921 "uuid": "f2c0d983-ba49-41f2-9d2a-b23c9f00f5a7", 00:20:00.921 "is_configured": true, 00:20:00.921 "data_offset": 2048, 00:20:00.921 "data_size": 63488 00:20:00.921 }, 00:20:00.921 { 00:20:00.921 "name": "BaseBdev2", 00:20:00.921 "uuid": "297e9d30-4dc5-4f27-8efa-396a9e68ffaa", 00:20:00.921 "is_configured": true, 00:20:00.921 "data_offset": 2048, 00:20:00.921 "data_size": 63488 00:20:00.921 }, 00:20:00.921 { 00:20:00.921 "name": "BaseBdev3", 00:20:00.921 "uuid": "5502183c-4c87-4977-ad98-6ae4cff3857a", 00:20:00.921 "is_configured": true, 00:20:00.921 "data_offset": 2048, 00:20:00.921 "data_size": 63488 00:20:00.921 }, 00:20:00.921 { 00:20:00.921 "name": "BaseBdev4", 00:20:00.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.921 "is_configured": false, 00:20:00.921 "data_offset": 0, 00:20:00.921 "data_size": 0 00:20:00.921 } 00:20:00.921 ] 00:20:00.921 }' 00:20:00.921 00:39:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:00.921 00:39:34 -- common/autotest_common.sh@10 -- # set +x 00:20:01.487 00:39:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:01.745 [2024-04-27 00:39:35.242461] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:01.745 [2024-04-27 00:39:35.242705] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:20:01.745 [2024-04-27 00:39:35.242719] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:01.745 [2024-04-27 00:39:35.242922] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:01.745 [2024-04-27 00:39:35.243268] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:20:01.745 [2024-04-27 00:39:35.243293] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:20:01.745 [2024-04-27 00:39:35.243484] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.745 BaseBdev4 00:20:01.745 00:39:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:01.745 00:39:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:20:01.745 00:39:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:01.745 00:39:35 -- common/autotest_common.sh@887 -- # local i 00:20:01.745 00:39:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:01.745 00:39:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:01.745 00:39:35 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:02.004 00:39:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:02.267 [ 00:20:02.267 { 00:20:02.267 "name": "BaseBdev4", 00:20:02.267 "aliases": [ 00:20:02.267 "ca105860-a89a-4b21-bfd7-2228f17b2d6e" 00:20:02.267 ], 00:20:02.267 "product_name": "Malloc disk", 00:20:02.267 "block_size": 512, 00:20:02.267 "num_blocks": 65536, 00:20:02.267 "uuid": "ca105860-a89a-4b21-bfd7-2228f17b2d6e", 00:20:02.267 "assigned_rate_limits": { 00:20:02.267 "rw_ios_per_sec": 0, 00:20:02.267 "rw_mbytes_per_sec": 0, 00:20:02.267 "r_mbytes_per_sec": 0, 00:20:02.267 "w_mbytes_per_sec": 0 00:20:02.267 }, 00:20:02.267 "claimed": true, 00:20:02.267 "claim_type": "exclusive_write", 00:20:02.267 "zoned": false, 00:20:02.267 "supported_io_types": { 00:20:02.267 "read": true, 00:20:02.267 "write": true, 00:20:02.267 "unmap": true, 00:20:02.267 "write_zeroes": true, 00:20:02.267 "flush": true, 00:20:02.267 "reset": true, 00:20:02.267 "compare": false, 00:20:02.267 "compare_and_write": false, 00:20:02.267 "abort": true, 00:20:02.267 "nvme_admin": false, 00:20:02.267 "nvme_io": false 00:20:02.267 }, 00:20:02.267 "memory_domains": [ 00:20:02.267 { 00:20:02.267 "dma_device_id": "system", 00:20:02.267 "dma_device_type": 1 00:20:02.267 }, 00:20:02.267 { 00:20:02.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.267 "dma_device_type": 2 00:20:02.267 } 00:20:02.267 ], 00:20:02.267 "driver_specific": {} 00:20:02.267 } 00:20:02.267 ] 00:20:02.267 00:39:35 -- common/autotest_common.sh@893 -- # return 0 00:20:02.267 00:39:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:02.267 00:39:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:02.267 00:39:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:02.267 00:39:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:02.267 00:39:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.268 00:39:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.539 00:39:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.539 "name": "Existed_Raid", 00:20:02.539 "uuid": "e05c7c50-30f8-406d-a951-b7b82a7da23b", 00:20:02.539 "strip_size_kb": 64, 00:20:02.539 "state": "online", 00:20:02.539 "raid_level": "concat", 00:20:02.539 "superblock": true, 00:20:02.539 "num_base_bdevs": 4, 00:20:02.539 "num_base_bdevs_discovered": 4, 00:20:02.539 "num_base_bdevs_operational": 4, 00:20:02.539 "base_bdevs_list": [ 00:20:02.539 { 00:20:02.539 "name": "BaseBdev1", 00:20:02.539 "uuid": "f2c0d983-ba49-41f2-9d2a-b23c9f00f5a7", 00:20:02.539 "is_configured": true, 00:20:02.539 "data_offset": 2048, 00:20:02.539 "data_size": 63488 
00:20:02.539 }, 00:20:02.539 { 00:20:02.539 "name": "BaseBdev2", 00:20:02.539 "uuid": "297e9d30-4dc5-4f27-8efa-396a9e68ffaa", 00:20:02.539 "is_configured": true, 00:20:02.539 "data_offset": 2048, 00:20:02.539 "data_size": 63488 00:20:02.539 }, 00:20:02.539 { 00:20:02.539 "name": "BaseBdev3", 00:20:02.539 "uuid": "5502183c-4c87-4977-ad98-6ae4cff3857a", 00:20:02.539 "is_configured": true, 00:20:02.539 "data_offset": 2048, 00:20:02.539 "data_size": 63488 00:20:02.539 }, 00:20:02.539 { 00:20:02.539 "name": "BaseBdev4", 00:20:02.539 "uuid": "ca105860-a89a-4b21-bfd7-2228f17b2d6e", 00:20:02.539 "is_configured": true, 00:20:02.539 "data_offset": 2048, 00:20:02.539 "data_size": 63488 00:20:02.539 } 00:20:02.539 ] 00:20:02.539 }' 00:20:02.540 00:39:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.540 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:20:03.477 00:39:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:03.477 [2024-04-27 00:39:36.943092] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:03.477 [2024-04-27 00:39:36.943131] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.477 [2024-04-27 00:39:36.943238] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.477 00:39:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.044 00:39:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:04.044 "name": "Existed_Raid", 00:20:04.044 "uuid": "e05c7c50-30f8-406d-a951-b7b82a7da23b", 00:20:04.044 "strip_size_kb": 64, 00:20:04.044 "state": "offline", 00:20:04.044 "raid_level": "concat", 00:20:04.044 "superblock": true, 00:20:04.044 "num_base_bdevs": 4, 00:20:04.044 "num_base_bdevs_discovered": 3, 00:20:04.044 "num_base_bdevs_operational": 3, 00:20:04.044 "base_bdevs_list": [ 00:20:04.044 { 00:20:04.044 "name": null, 00:20:04.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.044 "is_configured": false, 00:20:04.044 "data_offset": 2048, 00:20:04.044 "data_size": 63488 00:20:04.044 }, 00:20:04.044 { 00:20:04.044 "name": "BaseBdev2", 00:20:04.044 "uuid": 
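Deleting BaseBdev1 out of the online array drives the @263-265 branch above: has_redundancy returns 1 for concat, so the expected state flips to offline. A sketch consistent with the case/return trace (only the concat path is exercised here; the real function may list more levels):

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # a mirrored level survives losing a member
            *)     return 1 ;;   # concat (like raid0) does not
        esac
    }
    if has_redundancy concat; then expected_state=online; else expected_state=offline; fi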
"297e9d30-4dc5-4f27-8efa-396a9e68ffaa", 00:20:04.044 "is_configured": true, 00:20:04.044 "data_offset": 2048, 00:20:04.044 "data_size": 63488 00:20:04.044 }, 00:20:04.044 { 00:20:04.044 "name": "BaseBdev3", 00:20:04.044 "uuid": "5502183c-4c87-4977-ad98-6ae4cff3857a", 00:20:04.044 "is_configured": true, 00:20:04.044 "data_offset": 2048, 00:20:04.044 "data_size": 63488 00:20:04.044 }, 00:20:04.044 { 00:20:04.044 "name": "BaseBdev4", 00:20:04.044 "uuid": "ca105860-a89a-4b21-bfd7-2228f17b2d6e", 00:20:04.044 "is_configured": true, 00:20:04.044 "data_offset": 2048, 00:20:04.044 "data_size": 63488 00:20:04.044 } 00:20:04.044 ] 00:20:04.044 }' 00:20:04.044 00:39:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:04.044 00:39:37 -- common/autotest_common.sh@10 -- # set +x 00:20:04.611 00:39:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:04.611 00:39:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:04.611 00:39:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.611 00:39:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:04.611 00:39:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:04.611 00:39:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:04.611 00:39:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:04.870 [2024-04-27 00:39:38.435417] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:05.129 00:39:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:05.129 00:39:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:05.129 00:39:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.129 00:39:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:05.388 00:39:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:05.388 00:39:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.388 00:39:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:05.388 [2024-04-27 00:39:38.946536] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:05.647 00:39:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:05.647 00:39:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:05.647 00:39:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.647 00:39:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:05.905 00:39:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:05.905 00:39:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.905 00:39:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:05.905 [2024-04-27 00:39:39.492630] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:05.905 [2024-04-27 00:39:39.492701] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:20:06.164 00:39:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:06.164 00:39:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:06.164 00:39:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:06.164 00:39:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:06.424 00:39:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:06.424 00:39:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:06.424 00:39:39 -- bdev/bdev_raid.sh@287 -- # killprocess 127771 00:20:06.424 00:39:39 -- common/autotest_common.sh@936 -- # '[' -z 127771 ']' 00:20:06.424 00:39:39 -- common/autotest_common.sh@940 -- # kill -0 127771 00:20:06.424 00:39:39 -- common/autotest_common.sh@941 -- # uname 00:20:06.424 00:39:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.424 00:39:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 127771 00:20:06.424 killing process with pid 127771 00:20:06.424 00:39:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:06.424 00:39:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:06.424 00:39:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 127771' 00:20:06.424 00:39:39 -- common/autotest_common.sh@955 -- # kill 127771 00:20:06.424 [2024-04-27 00:39:39.832675] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.424 00:39:39 -- common/autotest_common.sh@960 -- # wait 127771 00:20:06.424 [2024-04-27 00:39:39.832779] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.361 ************************************ 00:20:07.361 END TEST raid_state_function_test_sb 00:20:07.361 ************************************ 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:07.361 00:20:07.361 real 0m15.479s 00:20:07.361 user 0m27.602s 00:20:07.361 sys 0m1.837s 00:20:07.361 00:39:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:07.361 00:39:40 -- common/autotest_common.sh@10 -- # set +x 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:20:07.361 00:39:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:07.361 00:39:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:07.361 00:39:40 -- common/autotest_common.sh@10 -- # set +x 00:20:07.361 ************************************ 00:20:07.361 START TEST raid_superblock_test 00:20:07.361 ************************************ 00:20:07.361 00:39:40 -- common/autotest_common.sh@1111 -- # raid_superblock_test concat 4 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@351 -- # 
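killprocess, traced at common/autotest_common.sh@936-960 above, sanity-checks the pid before signalling it. A trimmed sketch; the sudo branch at @946 is simplified away here (the trace shows the process name resolving to reactor_0, not sudo), so treat this as an approximation, not the helper's real body:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # pid must still be alive
        [ "$(uname)" = Linux ] &&
            process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1            # simplified guard
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
    killprocess 127771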
strip_size_create_arg='-z 64' 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@357 -- # raid_pid=128230 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:07.361 00:39:40 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128230 /var/tmp/spdk-raid.sock 00:20:07.361 00:39:40 -- common/autotest_common.sh@817 -- # '[' -z 128230 ']' 00:20:07.361 00:39:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:07.361 00:39:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:07.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:07.361 00:39:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:07.361 00:39:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:07.361 00:39:40 -- common/autotest_common.sh@10 -- # set +x 00:20:07.620 [2024-04-27 00:39:40.986592] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:20:07.620 [2024-04-27 00:39:40.986981] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128230 ] 00:20:07.620 [2024-04-27 00:39:41.141302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.879 [2024-04-27 00:39:41.324091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.138 [2024-04-27 00:39:41.503982] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.397 00:39:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:08.397 00:39:41 -- common/autotest_common.sh@850 -- # return 0 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:08.397 00:39:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:08.656 malloc1 00:20:08.656 00:39:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:08.914 [2024-04-27 00:39:42.451555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:08.914 [2024-04-27 00:39:42.451827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.914 [2024-04-27 00:39:42.451908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:08.914 [2024-04-27 00:39:42.452156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.914 [2024-04-27 00:39:42.454976] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.914 [2024-04-27 
00:39:42.455178] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:08.914 pt1 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:08.914 00:39:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:09.176 malloc2 00:20:09.176 00:39:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:09.440 [2024-04-27 00:39:42.981263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:09.440 [2024-04-27 00:39:42.981540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.440 [2024-04-27 00:39:42.981764] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:09.440 [2024-04-27 00:39:42.981933] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.440 [2024-04-27 00:39:42.984481] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.440 [2024-04-27 00:39:42.984657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:09.440 pt2 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:09.440 00:39:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:10.007 malloc3 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:10.007 [2024-04-27 00:39:43.530240] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:10.007 [2024-04-27 00:39:43.530516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.007 [2024-04-27 00:39:43.530712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:10.007 [2024-04-27 00:39:43.530914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.007 [2024-04-27 00:39:43.533519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.007 [2024-04-27 
00:39:43.533701] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:10.007 pt3 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:10.007 00:39:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:10.266 malloc4 00:20:10.266 00:39:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:10.524 [2024-04-27 00:39:43.985539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:10.524 [2024-04-27 00:39:43.985779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.524 [2024-04-27 00:39:43.985944] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:10.524 [2024-04-27 00:39:43.986087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.524 [2024-04-27 00:39:43.988575] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.524 [2024-04-27 00:39:43.988754] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:10.524 pt4 00:20:10.524 00:39:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:10.524 00:39:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:10.524 00:39:43 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:10.782 [2024-04-27 00:39:44.181593] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.782 [2024-04-27 00:39:44.183842] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:10.782 [2024-04-27 00:39:44.184075] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:10.782 [2024-04-27 00:39:44.184279] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:10.782 [2024-04-27 00:39:44.184624] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:20:10.782 [2024-04-27 00:39:44.184797] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:10.782 [2024-04-27 00:39:44.184943] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:10.782 [2024-04-27 00:39:44.185414] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:20:10.782 [2024-04-27 00:39:44.185540] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:20:10.782 [2024-04-27 00:39:44.185811] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state 
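The superblock test builds each member as a malloc bdev wrapped in a passthru bdev with a fixed uuid (@361-371 above), then assembles the array from the pt devices. The commands below are copied from the trace; only the loop form is a reconstruction:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"      # 32 MB, 512 B blocks -> 65536 blocks
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    $rpc bdev_raid_create -z 64 -s -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1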
raid_bdev1 online concat 64 4 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.782 00:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.040 00:39:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.040 "name": "raid_bdev1", 00:20:11.040 "uuid": "a9596a05-1f7f-4867-a4cc-8b055773b63c", 00:20:11.040 "strip_size_kb": 64, 00:20:11.040 "state": "online", 00:20:11.040 "raid_level": "concat", 00:20:11.040 "superblock": true, 00:20:11.040 "num_base_bdevs": 4, 00:20:11.040 "num_base_bdevs_discovered": 4, 00:20:11.040 "num_base_bdevs_operational": 4, 00:20:11.040 "base_bdevs_list": [ 00:20:11.040 { 00:20:11.040 "name": "pt1", 00:20:11.040 "uuid": "b6398773-9296-5e54-95b4-22a62b3e4ff7", 00:20:11.040 "is_configured": true, 00:20:11.040 "data_offset": 2048, 00:20:11.040 "data_size": 63488 00:20:11.040 }, 00:20:11.040 { 00:20:11.040 "name": "pt2", 00:20:11.040 "uuid": "9932bd23-d885-57d9-8aa1-b830b9c27eb8", 00:20:11.040 "is_configured": true, 00:20:11.040 "data_offset": 2048, 00:20:11.040 "data_size": 63488 00:20:11.040 }, 00:20:11.040 { 00:20:11.040 "name": "pt3", 00:20:11.040 "uuid": "efec95ea-6b57-52a0-9860-0526af4d622e", 00:20:11.040 "is_configured": true, 00:20:11.040 "data_offset": 2048, 00:20:11.040 "data_size": 63488 00:20:11.040 }, 00:20:11.040 { 00:20:11.040 "name": "pt4", 00:20:11.040 "uuid": "bcde4026-7382-5cc4-8a72-9b9f5b8c6ccb", 00:20:11.040 "is_configured": true, 00:20:11.040 "data_offset": 2048, 00:20:11.040 "data_size": 63488 00:20:11.040 } 00:20:11.040 ] 00:20:11.040 }' 00:20:11.040 00:39:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.040 00:39:44 -- common/autotest_common.sh@10 -- # set +x 00:20:11.607 00:39:45 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:11.607 00:39:45 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:11.866 [2024-04-27 00:39:45.358305] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.866 00:39:45 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=a9596a05-1f7f-4867-a4cc-8b055773b63c 00:20:11.866 00:39:45 -- bdev/bdev_raid.sh@380 -- # '[' -z a9596a05-1f7f-4867-a4cc-8b055773b63c ']' 00:20:11.866 00:39:45 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:12.125 [2024-04-27 00:39:45.582034] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:12.125 [2024-04-27 00:39:45.582231] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.125 [2024-04-27 00:39:45.582471] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.125 
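The trace above builds each base device in two layers (a malloc bdev wrapped by a passthru bdev with a fixed UUID) and then assembles the four passthru bdevs into a concat array with an on-disk superblock. A minimal sketch of the same RPC sequence, assuming an SPDK target already listening on /var/tmp/spdk-raid.sock as in this run; the rpc and sock shorthand variables are added here only for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        # 32 MiB malloc bdev with 512-byte blocks, wrapped by a passthru bdev
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # concat array, 64 KiB strip size; -s writes a superblock to each base bdev
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s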
00:20:11.866 00:39:45 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:20:12.125 [2024-04-27 00:39:45.582034] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:12.125 [2024-04-27 00:39:45.582231] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:20:12.125 [2024-04-27 00:39:45.582471] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:12.125 [2024-04-27 00:39:45.582678] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:12.125 [2024-04-27 00:39:45.582836] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline
00:20:12.125 00:39:45 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:12.125 00:39:45 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:20:12.383 00:39:45 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:20:12.383 00:39:45 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:20:12.383 00:39:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:20:12.383 00:39:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:20:12.641 00:39:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:20:12.641 00:39:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:20:12.900 00:39:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:20:12.900 00:39:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:20:13.158 00:39:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:20:13.158 00:39:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:20:13.417 00:39:46 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:20:13.417 00:39:46 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:20:13.676 00:39:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:20:13.676 00:39:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:20:13.676 00:39:47 -- common/autotest_common.sh@638 -- # local es=0
00:20:13.676 00:39:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:20:13.676 00:39:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:13.676 00:39:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:20:13.676 00:39:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:13.676 00:39:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:20:13.676 00:39:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:13.676 00:39:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:20:13.676 00:39:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:13.676 00:39:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:20:13.676 00:39:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:20:13.676 [2024-04-27 00:39:47.238337] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:20:13.676 [2024-04-27 00:39:47.240551] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:20:13.676 [2024-04-27 00:39:47.240722] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:20:13.676 [2024-04-27 00:39:47.240808] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:20:13.676 [2024-04-27 00:39:47.240905] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:20:13.676 [2024-04-27 00:39:47.241108] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:20:13.676 [2024-04-27 00:39:47.241261] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:20:13.676 [2024-04-27 00:39:47.241420] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:20:13.676 [2024-04-27 00:39:47.241543] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:20:13.676 [2024-04-27 00:39:47.241652] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring
00:20:13.676 request:
00:20:13.676 {
00:20:13.676 "name": "raid_bdev1",
00:20:13.676 "raid_level": "concat",
00:20:13.676 "base_bdevs": [
00:20:13.676 "malloc1",
00:20:13.676 "malloc2",
00:20:13.676 "malloc3",
00:20:13.676 "malloc4"
00:20:13.676 ],
00:20:13.676 "superblock": false,
00:20:13.676 "strip_size_kb": 64,
00:20:13.676 "method": "bdev_raid_create",
00:20:13.676 "req_id": 1
00:20:13.676 }
00:20:13.676 Got JSON-RPC error response
00:20:13.676 response:
00:20:13.676 {
00:20:13.676 "code": -17,
00:20:13.676 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:20:13.676 }
00:20:13.676 00:39:47 -- common/autotest_common.sh@641 -- # es=1
00:20:13.676 00:39:47 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:20:13.676 00:39:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:20:13.676 00:39:47 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:20:13.676 00:39:47 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:13.676 00:39:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:20:13.935 00:39:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:20:13.935 00:39:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
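The failure just recorded is the point of this step: the malloc bdevs still carry the superblock written for raid_bdev1, so creating a new array directly on them is rejected with JSON-RPC error -17 (File exists). NOT is the autotest helper that inverts the exit status, so the test passes only because the create fails. A sketch under the same shorthands as above:

    # Expected to fail: stale raid superblocks are still present on malloc1-4.
    NOT "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1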
00:20:13.935 00:39:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:20:14.194 [2024-04-27 00:39:47.710422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:20:14.194 [2024-04-27 00:39:47.710696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:14.194 [2024-04-27 00:39:47.710887] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:20:14.194 [2024-04-27 00:39:47.711023] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:14.194 [2024-04-27 00:39:47.713366] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:14.194 [2024-04-27 00:39:47.713561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:20:14.194 [2024-04-27 00:39:47.713832] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:20:14.194 [2024-04-27 00:39:47.714042] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:20:14.194 pt1
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:14.194 00:39:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:14.452 00:39:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:14.452 "name": "raid_bdev1",
00:20:14.452 "uuid": "a9596a05-1f7f-4867-a4cc-8b055773b63c",
00:20:14.452 "strip_size_kb": 64,
00:20:14.452 "state": "configuring",
00:20:14.452 "raid_level": "concat",
00:20:14.452 "superblock": true,
00:20:14.452 "num_base_bdevs": 4,
00:20:14.452 "num_base_bdevs_discovered": 1,
00:20:14.452 "num_base_bdevs_operational": 4,
00:20:14.452 "base_bdevs_list": [
00:20:14.452 {
00:20:14.452 "name": "pt1",
00:20:14.452 "uuid": "b6398773-9296-5e54-95b4-22a62b3e4ff7",
00:20:14.452 "is_configured": true,
00:20:14.452 "data_offset": 2048,
00:20:14.452 "data_size": 63488
00:20:14.452 },
00:20:14.452 {
00:20:14.452 "name": null,
00:20:14.452 "uuid": "9932bd23-d885-57d9-8aa1-b830b9c27eb8",
00:20:14.452 "is_configured": false,
00:20:14.452 "data_offset": 2048,
00:20:14.452 "data_size": 63488
00:20:14.452 },
00:20:14.452 {
00:20:14.452 "name": null,
00:20:14.452 "uuid": "efec95ea-6b57-52a0-9860-0526af4d622e",
00:20:14.452 "is_configured": false,
00:20:14.452 "data_offset": 2048,
00:20:14.452 "data_size": 63488
00:20:14.452 },
00:20:14.452 {
00:20:14.452 "name": null,
00:20:14.452 "uuid": "bcde4026-7382-5cc4-8a72-9b9f5b8c6ccb",
00:20:14.452 "is_configured": false,
00:20:14.452 "data_offset": 2048,
00:20:14.452 "data_size": 63488
00:20:14.452 }
00:20:14.452 ]
00:20:14.452 }'
00:20:14.452 00:39:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:14.452 00:39:47 -- common/autotest_common.sh@10 -- # set +x
00:20:15.019 00:39:48 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:20:15.019 00:39:48 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:15.278 [2024-04-27 00:39:48.794753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:15.278 [2024-04-27 00:39:48.795017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:15.278 [2024-04-27 00:39:48.795184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:20:15.278 [2024-04-27 00:39:48.795341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:15.278 [2024-04-27 00:39:48.795895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:15.278 [2024-04-27 00:39:48.796105] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:15.278 [2024-04-27 00:39:48.796336] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:20:15.278 [2024-04-27 00:39:48.796462] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:15.278 pt2
00:20:15.278 00:39:48 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:20:15.537 [2024-04-27 00:39:49.018821] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:15.537 00:39:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:15.795 00:39:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:15.795 "name": "raid_bdev1",
00:20:15.795 "uuid": "a9596a05-1f7f-4867-a4cc-8b055773b63c",
00:20:15.795 "strip_size_kb": 64,
00:20:15.795 "state": "configuring",
00:20:15.795 "raid_level": "concat",
00:20:15.795 "superblock": true,
00:20:15.795 "num_base_bdevs": 4,
00:20:15.795 "num_base_bdevs_discovered": 1,
00:20:15.795 "num_base_bdevs_operational": 4,
00:20:15.795 "base_bdevs_list": [
00:20:15.795 {
00:20:15.795 "name": "pt1",
00:20:15.795 "uuid": "b6398773-9296-5e54-95b4-22a62b3e4ff7",
00:20:15.795 "is_configured": true,
00:20:15.795 "data_offset": 2048,
00:20:15.795 "data_size": 63488
00:20:15.795 },
00:20:15.795 {
00:20:15.795 "name": null,
00:20:15.795 "uuid": "9932bd23-d885-57d9-8aa1-b830b9c27eb8",
00:20:15.795 "is_configured": false,
00:20:15.795 "data_offset": 2048,
00:20:15.795 "data_size": 63488
00:20:15.795 },
00:20:15.795 {
00:20:15.795 "name": null,
00:20:15.795 "uuid": "efec95ea-6b57-52a0-9860-0526af4d622e",
00:20:15.795 "is_configured": false,
00:20:15.795 "data_offset": 2048,
00:20:15.795 "data_size": 63488
00:20:15.795 },
00:20:15.795 {
00:20:15.795 "name": null,
00:20:15.795 "uuid": "bcde4026-7382-5cc4-8a72-9b9f5b8c6ccb",
00:20:15.795 "is_configured": false,
00:20:15.795 "data_offset": 2048,
00:20:15.795 "data_size": 63488
00:20:15.795 }
00:20:15.795 ]
00:20:15.795 }'
00:20:15.795 00:39:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:15.795 00:39:49 -- common/autotest_common.sh@10 -- # set +x
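Deleting pt2 above exercises base-bdev removal (_raid_bdev_remove_base_bdev), after which verify_raid_bdev_state re-checks the array: still configuring, with num_base_bdevs_discovered back at 1. The check is the get-and-filter pattern visible in the trace; a condensed sketch under the same shorthands, with the jq field extraction added here purely as an illustration of how the returned record can be compared:

    # Fetch the record for one raid bdev and compare fields against the
    # expected values, as verify_raid_bdev_state does above.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state <<< "$info")" = configuring ]
    [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq 1 ]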
00:20:16.391 00:39:49 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:20:16.391 00:39:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:20:16.391 00:39:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:20:16.648 [2024-04-27 00:39:50.171226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:20:16.648 [2024-04-27 00:39:50.171569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:16.648 [2024-04-27 00:39:50.171732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:20:16.648 [2024-04-27 00:39:50.171850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:16.648 [2024-04-27 00:39:50.172489] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:16.648 [2024-04-27 00:39:50.172703] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:20:16.648 [2024-04-27 00:39:50.172982] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:20:16.648 [2024-04-27 00:39:50.173118] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:20:16.648 pt2
00:20:16.648 00:39:50 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:20:16.648 00:39:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:20:16.648 00:39:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:20:16.906 [2024-04-27 00:39:50.383221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:20:16.906 [2024-04-27 00:39:50.383464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:16.906 [2024-04-27 00:39:50.383596] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:20:16.906 [2024-04-27 00:39:50.383719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:16.906 [2024-04-27 00:39:50.384274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:16.906 [2024-04-27 00:39:50.384464] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:20:16.906 [2024-04-27 00:39:50.384680] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:20:16.906 [2024-04-27 00:39:50.384785] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:20:16.906 pt3
00:20:16.906 00:39:50 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:20:16.906 00:39:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:20:16.906 00:39:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:20:17.165 [2024-04-27 00:39:50.655408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:20:17.165 [2024-04-27 00:39:50.655665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:17.165 [2024-04-27 00:39:50.655750] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:20:17.165 [2024-04-27 00:39:50.656015] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:17.165 [2024-04-27 00:39:50.656645] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:17.165 [2024-04-27 00:39:50.656835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:20:17.165 [2024-04-27 00:39:50.657094] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:20:17.165 [2024-04-27 00:39:50.657221] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:20:17.165 [2024-04-27 00:39:50.657425] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500
00:20:17.165 [2024-04-27 00:39:50.657544] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:20:17.165 [2024-04-27 00:39:50.657764] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:20:17.165 [2024-04-27 00:39:50.658206] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500
00:20:17.165 [2024-04-27 00:39:50.658318] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500
00:20:17.165 [2024-04-27 00:39:50.658627] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:17.165 pt4
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@125 -- # local tmp
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:17.165 00:39:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:17.423 00:39:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:20:17.424 "name": "raid_bdev1",
00:20:17.424 "uuid": "a9596a05-1f7f-4867-a4cc-8b055773b63c",
00:20:17.424 "strip_size_kb": 64,
00:20:17.424 "state": "online",
00:20:17.424 "raid_level": "concat",
00:20:17.424 "superblock": true,
00:20:17.424 "num_base_bdevs": 4,
00:20:17.424 "num_base_bdevs_discovered": 4,
00:20:17.424 "num_base_bdevs_operational": 4,
00:20:17.424 "base_bdevs_list": [
00:20:17.424 {
00:20:17.424 "name": "pt1",
00:20:17.424 "uuid": "b6398773-9296-5e54-95b4-22a62b3e4ff7",
00:20:17.424 "is_configured": true,
00:20:17.424 "data_offset": 2048,
00:20:17.424 "data_size": 63488
00:20:17.424 },
00:20:17.424 {
00:20:17.424 "name": "pt2",
00:20:17.424 "uuid": "9932bd23-d885-57d9-8aa1-b830b9c27eb8",
00:20:17.424 "is_configured": true,
00:20:17.424 "data_offset": 2048,
00:20:17.424 "data_size": 63488
00:20:17.424 },
00:20:17.424 {
00:20:17.424 "name": "pt3",
00:20:17.424 "uuid": "efec95ea-6b57-52a0-9860-0526af4d622e",
00:20:17.424 "is_configured": true,
00:20:17.424 "data_offset": 2048,
00:20:17.424 "data_size": 63488
00:20:17.424 },
00:20:17.424 {
00:20:17.424 "name": "pt4",
00:20:17.424 "uuid": "bcde4026-7382-5cc4-8a72-9b9f5b8c6ccb",
00:20:17.424 "is_configured": true,
00:20:17.424 "data_offset": 2048,
00:20:17.424 "data_size": 63488
00:20:17.424 }
00:20:17.424 ]
00:20:17.424 }'
00:20:17.424 00:39:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:20:17.424 00:39:50 -- common/autotest_common.sh@10 -- # set +x
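Note that no second bdev_raid_create appears above: recreating pt2 through pt4 is enough, because the examine path (raid_bdev_examine_load_sb_cb) finds the raid superblock on each passthru bdev and re-claims it, and the array flips back to online on its own once the last base bdev arrives. A sketch of the reassembly step alone, same shorthands as above:

    # Superblock-driven reassembly: recreate the base bdevs and let examine
    # re-claim them; the array goes online by itself after pt4 appears.
    for i in 2 3 4; do
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done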
00:20:17.991 00:39:51 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:20:17.991 00:39:51 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:20:18.249 [2024-04-27 00:39:51.767900] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:20:18.249 00:39:51 -- bdev/bdev_raid.sh@430 -- # '[' a9596a05-1f7f-4867-a4cc-8b055773b63c '!=' a9596a05-1f7f-4867-a4cc-8b055773b63c ']'
00:20:18.249 00:39:51 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat
00:20:18.249 00:39:51 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:20:18.249 00:39:51 -- bdev/bdev_raid.sh@197 -- # return 1
00:20:18.249 00:39:51 -- bdev/bdev_raid.sh@511 -- # killprocess 128230
00:20:18.249 00:39:51 -- common/autotest_common.sh@936 -- # '[' -z 128230 ']'
00:20:18.249 00:39:51 -- common/autotest_common.sh@940 -- # kill -0 128230
00:20:18.249 00:39:51 -- common/autotest_common.sh@941 -- # uname
00:20:18.249 00:39:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:18.249 00:39:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128230
00:20:18.249 killing process with pid 128230 00:39:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:18.249 00:39:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:18.249 00:39:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128230'
00:20:18.249 00:39:51 -- common/autotest_common.sh@955 -- # kill 128230
00:20:18.249 [2024-04-27 00:39:51.808853] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:20:18.249 00:39:51 -- common/autotest_common.sh@960 -- # wait 128230
00:20:18.249 [2024-04-27 00:39:51.808921] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:18.249 [2024-04-27 00:39:51.808988] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:18.249 [2024-04-27 00:39:51.808998] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline
00:20:18.508 [2024-04-27 00:39:52.070456] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:20:19.531 ************************************
00:20:19.531 END TEST raid_superblock_test ************************************
00:20:19.531 00:39:53 -- bdev/bdev_raid.sh@513 -- # return 0
00:20:19.531
00:20:19.531 real 0m12.106s
00:20:19.531 user 0m21.049s
00:20:19.531 sys 0m1.512s
00:20:19.531 00:39:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:19.531 00:39:53 -- common/autotest_common.sh@10 -- # set +x
00:20:19.531 00:39:53 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:20:19.531 00:39:53 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false
00:20:19.531 00:39:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:20:19.531 00:39:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:19.531 00:39:53 -- common/autotest_common.sh@10 -- # set +x
00:20:19.790 ************************************
00:20:19.790 START TEST raid_state_function_test ************************************
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@216 -- # strip_size=0
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=128563
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128563'
00:20:19.790 Process raid pid: 128563
00:20:19.790 00:39:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128563 /var/tmp/spdk-raid.sock
00:20:19.790 00:39:53 -- common/autotest_common.sh@817 -- # '[' -z 128563 ']'
00:20:19.790 00:39:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:20:19.790 00:39:53 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:19.790 00:39:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:20:19.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:20:19.790 00:39:53 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:19.790 00:39:53 -- common/autotest_common.sh@10 -- # set +x
00:20:19.790 [2024-04-27 00:39:53.198678] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
00:20:19.790 [2024-04-27 00:39:53.199090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:19.790 [2024-04-27 00:39:53.352411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:20.049 [2024-04-27 00:39:53.521814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:20.316 [2024-04-27 00:39:53.692968] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:20.574 00:39:54 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:20.574 00:39:54 -- common/autotest_common.sh@850 -- # return 0
00:20:20.574 00:39:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:20:20.832 [2024-04-27 00:39:54.376645] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:20:20.832 [2024-04-27 00:39:54.376929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:20:20.832 [2024-04-27 00:39:54.377036] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:20:20.832 [2024-04-27 00:39:54.377097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:20:20.832 [2024-04-27 00:39:54.377193] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:20:20.832 [2024-04-27 00:39:54.377274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:20:20.832 [2024-04-27 00:39:54.377472] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:20:20.832 [2024-04-27 00:39:54.377538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
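Here the order is inverted relative to the superblock test just finished: the raid1 array is declared before any of its base bdevs exist, so Existed_Raid starts in the configuring state with num_base_bdevs_discovered at 0 and picks up each BaseBdevN as it is created. A sketch of that flow, same shorthands as in the earlier notes:

    # Declare the array first; all four base bdev names are still missing.
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Each malloc bdev created afterwards is discovered and claimed in turn.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1   # discovered: 0 -> 1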
"BaseBdev1", 00:20:21.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.091 "is_configured": false, 00:20:21.091 "data_offset": 0, 00:20:21.091 "data_size": 0 00:20:21.091 }, 00:20:21.091 { 00:20:21.091 "name": "BaseBdev2", 00:20:21.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.091 "is_configured": false, 00:20:21.091 "data_offset": 0, 00:20:21.091 "data_size": 0 00:20:21.091 }, 00:20:21.091 { 00:20:21.091 "name": "BaseBdev3", 00:20:21.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.091 "is_configured": false, 00:20:21.091 "data_offset": 0, 00:20:21.091 "data_size": 0 00:20:21.091 }, 00:20:21.091 { 00:20:21.091 "name": "BaseBdev4", 00:20:21.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.091 "is_configured": false, 00:20:21.091 "data_offset": 0, 00:20:21.091 "data_size": 0 00:20:21.091 } 00:20:21.091 ] 00:20:21.091 }' 00:20:21.091 00:39:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:21.091 00:39:54 -- common/autotest_common.sh@10 -- # set +x 00:20:22.021 00:39:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:22.021 [2024-04-27 00:39:55.544730] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.021 [2024-04-27 00:39:55.545012] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:20:22.021 00:39:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:22.278 [2024-04-27 00:39:55.792813] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:22.278 [2024-04-27 00:39:55.793041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:22.278 [2024-04-27 00:39:55.793147] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:22.278 [2024-04-27 00:39:55.793277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:22.278 [2024-04-27 00:39:55.793372] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:22.278 [2024-04-27 00:39:55.793528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:22.278 [2024-04-27 00:39:55.793626] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:22.278 [2024-04-27 00:39:55.793688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:22.278 00:39:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:22.534 [2024-04-27 00:39:56.035780] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.534 BaseBdev1 00:20:22.534 00:39:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:22.534 00:39:56 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:22.534 00:39:56 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:22.534 00:39:56 -- common/autotest_common.sh@887 -- # local i 00:20:22.534 00:39:56 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:22.534 00:39:56 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:22.534 00:39:56 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:22.791 00:39:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:23.049 [ 00:20:23.049 { 00:20:23.049 "name": "BaseBdev1", 00:20:23.049 "aliases": [ 00:20:23.049 "1880db7d-5886-4e6f-b3c0-629a50b5f0ab" 00:20:23.049 ], 00:20:23.049 "product_name": "Malloc disk", 00:20:23.049 "block_size": 512, 00:20:23.049 "num_blocks": 65536, 00:20:23.049 "uuid": "1880db7d-5886-4e6f-b3c0-629a50b5f0ab", 00:20:23.049 "assigned_rate_limits": { 00:20:23.049 "rw_ios_per_sec": 0, 00:20:23.049 "rw_mbytes_per_sec": 0, 00:20:23.049 "r_mbytes_per_sec": 0, 00:20:23.049 "w_mbytes_per_sec": 0 00:20:23.049 }, 00:20:23.049 "claimed": true, 00:20:23.049 "claim_type": "exclusive_write", 00:20:23.049 "zoned": false, 00:20:23.049 "supported_io_types": { 00:20:23.049 "read": true, 00:20:23.049 "write": true, 00:20:23.049 "unmap": true, 00:20:23.049 "write_zeroes": true, 00:20:23.049 "flush": true, 00:20:23.049 "reset": true, 00:20:23.049 "compare": false, 00:20:23.049 "compare_and_write": false, 00:20:23.049 "abort": true, 00:20:23.049 "nvme_admin": false, 00:20:23.049 "nvme_io": false 00:20:23.049 }, 00:20:23.049 "memory_domains": [ 00:20:23.049 { 00:20:23.049 "dma_device_id": "system", 00:20:23.049 "dma_device_type": 1 00:20:23.049 }, 00:20:23.049 { 00:20:23.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.049 "dma_device_type": 2 00:20:23.049 } 00:20:23.049 ], 00:20:23.049 "driver_specific": {} 00:20:23.049 } 00:20:23.049 ] 00:20:23.049 00:39:56 -- common/autotest_common.sh@893 -- # return 0 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.049 00:39:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.307 00:39:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.307 "name": "Existed_Raid", 00:20:23.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.307 "strip_size_kb": 0, 00:20:23.307 "state": "configuring", 00:20:23.307 "raid_level": "raid1", 00:20:23.307 "superblock": false, 00:20:23.307 "num_base_bdevs": 4, 00:20:23.307 "num_base_bdevs_discovered": 1, 00:20:23.307 "num_base_bdevs_operational": 4, 00:20:23.307 "base_bdevs_list": [ 00:20:23.307 { 00:20:23.307 "name": "BaseBdev1", 00:20:23.307 "uuid": "1880db7d-5886-4e6f-b3c0-629a50b5f0ab", 00:20:23.307 "is_configured": true, 00:20:23.307 "data_offset": 0, 00:20:23.307 "data_size": 65536 00:20:23.307 }, 00:20:23.307 { 00:20:23.307 "name": "BaseBdev2", 00:20:23.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.307 
"is_configured": false, 00:20:23.307 "data_offset": 0, 00:20:23.307 "data_size": 0 00:20:23.307 }, 00:20:23.307 { 00:20:23.307 "name": "BaseBdev3", 00:20:23.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.307 "is_configured": false, 00:20:23.307 "data_offset": 0, 00:20:23.307 "data_size": 0 00:20:23.307 }, 00:20:23.307 { 00:20:23.307 "name": "BaseBdev4", 00:20:23.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.307 "is_configured": false, 00:20:23.307 "data_offset": 0, 00:20:23.307 "data_size": 0 00:20:23.307 } 00:20:23.307 ] 00:20:23.307 }' 00:20:23.307 00:39:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.307 00:39:56 -- common/autotest_common.sh@10 -- # set +x 00:20:23.872 00:39:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:24.131 [2024-04-27 00:39:57.616179] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:24.131 [2024-04-27 00:39:57.616438] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:20:24.131 00:39:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:20:24.131 00:39:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:24.390 [2024-04-27 00:39:57.836265] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.390 [2024-04-27 00:39:57.838623] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:24.390 [2024-04-27 00:39:57.838908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:24.390 [2024-04-27 00:39:57.839075] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:24.390 [2024-04-27 00:39:57.839148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:24.390 [2024-04-27 00:39:57.839333] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:24.390 [2024-04-27 00:39:57.839395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.390 00:39:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.648 00:39:58 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.648 "name": "Existed_Raid", 00:20:24.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.648 "strip_size_kb": 0, 00:20:24.648 "state": "configuring", 00:20:24.648 "raid_level": "raid1", 00:20:24.648 "superblock": false, 00:20:24.648 "num_base_bdevs": 4, 00:20:24.648 "num_base_bdevs_discovered": 1, 00:20:24.648 "num_base_bdevs_operational": 4, 00:20:24.648 "base_bdevs_list": [ 00:20:24.648 { 00:20:24.648 "name": "BaseBdev1", 00:20:24.648 "uuid": "1880db7d-5886-4e6f-b3c0-629a50b5f0ab", 00:20:24.648 "is_configured": true, 00:20:24.648 "data_offset": 0, 00:20:24.648 "data_size": 65536 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "name": "BaseBdev2", 00:20:24.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.648 "is_configured": false, 00:20:24.648 "data_offset": 0, 00:20:24.648 "data_size": 0 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "name": "BaseBdev3", 00:20:24.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.648 "is_configured": false, 00:20:24.648 "data_offset": 0, 00:20:24.648 "data_size": 0 00:20:24.648 }, 00:20:24.648 { 00:20:24.648 "name": "BaseBdev4", 00:20:24.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.648 "is_configured": false, 00:20:24.648 "data_offset": 0, 00:20:24.648 "data_size": 0 00:20:24.648 } 00:20:24.648 ] 00:20:24.648 }' 00:20:24.648 00:39:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.648 00:39:58 -- common/autotest_common.sh@10 -- # set +x 00:20:25.216 00:39:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:25.474 [2024-04-27 00:39:58.997337] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:25.474 BaseBdev2 00:20:25.474 00:39:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:25.474 00:39:59 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:20:25.474 00:39:59 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:25.474 00:39:59 -- common/autotest_common.sh@887 -- # local i 00:20:25.474 00:39:59 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:25.474 00:39:59 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:25.474 00:39:59 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:25.732 00:39:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:25.991 [ 00:20:25.991 { 00:20:25.991 "name": "BaseBdev2", 00:20:25.991 "aliases": [ 00:20:25.991 "dfae53f1-a276-4921-9caa-a3bcaf30d9f2" 00:20:25.991 ], 00:20:25.991 "product_name": "Malloc disk", 00:20:25.991 "block_size": 512, 00:20:25.991 "num_blocks": 65536, 00:20:25.991 "uuid": "dfae53f1-a276-4921-9caa-a3bcaf30d9f2", 00:20:25.991 "assigned_rate_limits": { 00:20:25.991 "rw_ios_per_sec": 0, 00:20:25.991 "rw_mbytes_per_sec": 0, 00:20:25.991 "r_mbytes_per_sec": 0, 00:20:25.991 "w_mbytes_per_sec": 0 00:20:25.991 }, 00:20:25.991 "claimed": true, 00:20:25.991 "claim_type": "exclusive_write", 00:20:25.991 "zoned": false, 00:20:25.991 "supported_io_types": { 00:20:25.991 "read": true, 00:20:25.991 "write": true, 00:20:25.991 "unmap": true, 00:20:25.991 "write_zeroes": true, 00:20:25.991 "flush": true, 00:20:25.991 "reset": true, 00:20:25.991 "compare": false, 00:20:25.991 "compare_and_write": false, 00:20:25.991 "abort": true, 00:20:25.991 "nvme_admin": 
false, 00:20:25.991 "nvme_io": false 00:20:25.991 }, 00:20:25.991 "memory_domains": [ 00:20:25.991 { 00:20:25.991 "dma_device_id": "system", 00:20:25.991 "dma_device_type": 1 00:20:25.991 }, 00:20:25.991 { 00:20:25.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:25.991 "dma_device_type": 2 00:20:25.991 } 00:20:25.991 ], 00:20:25.991 "driver_specific": {} 00:20:25.991 } 00:20:25.991 ] 00:20:25.991 00:39:59 -- common/autotest_common.sh@893 -- # return 0 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.991 00:39:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.250 00:39:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.250 "name": "Existed_Raid", 00:20:26.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.250 "strip_size_kb": 0, 00:20:26.250 "state": "configuring", 00:20:26.250 "raid_level": "raid1", 00:20:26.250 "superblock": false, 00:20:26.250 "num_base_bdevs": 4, 00:20:26.250 "num_base_bdevs_discovered": 2, 00:20:26.250 "num_base_bdevs_operational": 4, 00:20:26.250 "base_bdevs_list": [ 00:20:26.250 { 00:20:26.250 "name": "BaseBdev1", 00:20:26.250 "uuid": "1880db7d-5886-4e6f-b3c0-629a50b5f0ab", 00:20:26.250 "is_configured": true, 00:20:26.250 "data_offset": 0, 00:20:26.250 "data_size": 65536 00:20:26.250 }, 00:20:26.250 { 00:20:26.250 "name": "BaseBdev2", 00:20:26.250 "uuid": "dfae53f1-a276-4921-9caa-a3bcaf30d9f2", 00:20:26.250 "is_configured": true, 00:20:26.250 "data_offset": 0, 00:20:26.250 "data_size": 65536 00:20:26.250 }, 00:20:26.250 { 00:20:26.250 "name": "BaseBdev3", 00:20:26.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.250 "is_configured": false, 00:20:26.250 "data_offset": 0, 00:20:26.250 "data_size": 0 00:20:26.250 }, 00:20:26.250 { 00:20:26.250 "name": "BaseBdev4", 00:20:26.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.250 "is_configured": false, 00:20:26.250 "data_offset": 0, 00:20:26.250 "data_size": 0 00:20:26.250 } 00:20:26.250 ] 00:20:26.250 }' 00:20:26.250 00:39:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.250 00:39:59 -- common/autotest_common.sh@10 -- # set +x 00:20:26.816 00:40:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:27.076 [2024-04-27 00:40:00.601779] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.076 BaseBdev3 00:20:27.076 00:40:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev 
BaseBdev3 00:20:27.076 00:40:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:20:27.076 00:40:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:27.076 00:40:00 -- common/autotest_common.sh@887 -- # local i 00:20:27.076 00:40:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:27.076 00:40:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:27.076 00:40:00 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:27.335 00:40:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:27.594 [ 00:20:27.594 { 00:20:27.594 "name": "BaseBdev3", 00:20:27.594 "aliases": [ 00:20:27.594 "b60cc160-8e00-4f35-a285-507720dd0b07" 00:20:27.594 ], 00:20:27.594 "product_name": "Malloc disk", 00:20:27.594 "block_size": 512, 00:20:27.594 "num_blocks": 65536, 00:20:27.594 "uuid": "b60cc160-8e00-4f35-a285-507720dd0b07", 00:20:27.594 "assigned_rate_limits": { 00:20:27.594 "rw_ios_per_sec": 0, 00:20:27.594 "rw_mbytes_per_sec": 0, 00:20:27.594 "r_mbytes_per_sec": 0, 00:20:27.594 "w_mbytes_per_sec": 0 00:20:27.594 }, 00:20:27.594 "claimed": true, 00:20:27.594 "claim_type": "exclusive_write", 00:20:27.594 "zoned": false, 00:20:27.594 "supported_io_types": { 00:20:27.594 "read": true, 00:20:27.594 "write": true, 00:20:27.594 "unmap": true, 00:20:27.594 "write_zeroes": true, 00:20:27.594 "flush": true, 00:20:27.594 "reset": true, 00:20:27.594 "compare": false, 00:20:27.594 "compare_and_write": false, 00:20:27.594 "abort": true, 00:20:27.594 "nvme_admin": false, 00:20:27.594 "nvme_io": false 00:20:27.594 }, 00:20:27.594 "memory_domains": [ 00:20:27.594 { 00:20:27.594 "dma_device_id": "system", 00:20:27.594 "dma_device_type": 1 00:20:27.594 }, 00:20:27.594 { 00:20:27.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.594 "dma_device_type": 2 00:20:27.594 } 00:20:27.594 ], 00:20:27.594 "driver_specific": {} 00:20:27.594 } 00:20:27.594 ] 00:20:27.594 00:40:01 -- common/autotest_common.sh@893 -- # return 0 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.594 00:40:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.853 00:40:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:27.853 "name": "Existed_Raid", 00:20:27.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.853 "strip_size_kb": 0, 00:20:27.853 
"state": "configuring", 00:20:27.853 "raid_level": "raid1", 00:20:27.853 "superblock": false, 00:20:27.853 "num_base_bdevs": 4, 00:20:27.853 "num_base_bdevs_discovered": 3, 00:20:27.853 "num_base_bdevs_operational": 4, 00:20:27.853 "base_bdevs_list": [ 00:20:27.853 { 00:20:27.853 "name": "BaseBdev1", 00:20:27.853 "uuid": "1880db7d-5886-4e6f-b3c0-629a50b5f0ab", 00:20:27.853 "is_configured": true, 00:20:27.853 "data_offset": 0, 00:20:27.853 "data_size": 65536 00:20:27.853 }, 00:20:27.853 { 00:20:27.853 "name": "BaseBdev2", 00:20:27.853 "uuid": "dfae53f1-a276-4921-9caa-a3bcaf30d9f2", 00:20:27.853 "is_configured": true, 00:20:27.853 "data_offset": 0, 00:20:27.853 "data_size": 65536 00:20:27.853 }, 00:20:27.853 { 00:20:27.853 "name": "BaseBdev3", 00:20:27.853 "uuid": "b60cc160-8e00-4f35-a285-507720dd0b07", 00:20:27.853 "is_configured": true, 00:20:27.853 "data_offset": 0, 00:20:27.853 "data_size": 65536 00:20:27.853 }, 00:20:27.853 { 00:20:27.853 "name": "BaseBdev4", 00:20:27.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.853 "is_configured": false, 00:20:27.853 "data_offset": 0, 00:20:27.853 "data_size": 0 00:20:27.853 } 00:20:27.853 ] 00:20:27.853 }' 00:20:27.853 00:40:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:27.853 00:40:01 -- common/autotest_common.sh@10 -- # set +x 00:20:28.421 00:40:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:28.680 [2024-04-27 00:40:02.177972] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:28.680 [2024-04-27 00:40:02.178281] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:20:28.680 [2024-04-27 00:40:02.178326] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:28.680 [2024-04-27 00:40:02.178542] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:28.680 [2024-04-27 00:40:02.179059] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:20:28.680 [2024-04-27 00:40:02.179248] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:20:28.680 [2024-04-27 00:40:02.179634] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.680 BaseBdev4 00:20:28.680 00:40:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:28.680 00:40:02 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:20:28.680 00:40:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:28.680 00:40:02 -- common/autotest_common.sh@887 -- # local i 00:20:28.680 00:40:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:28.680 00:40:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:28.680 00:40:02 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.940 00:40:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:29.201 [ 00:20:29.201 { 00:20:29.201 "name": "BaseBdev4", 00:20:29.201 "aliases": [ 00:20:29.201 "096690eb-9968-4c6b-af28-35e309fcdf20" 00:20:29.201 ], 00:20:29.201 "product_name": "Malloc disk", 00:20:29.201 "block_size": 512, 00:20:29.201 "num_blocks": 65536, 00:20:29.201 "uuid": "096690eb-9968-4c6b-af28-35e309fcdf20", 00:20:29.201 "assigned_rate_limits": { 
00:20:29.201 "rw_ios_per_sec": 0, 00:20:29.201 "rw_mbytes_per_sec": 0, 00:20:29.201 "r_mbytes_per_sec": 0, 00:20:29.201 "w_mbytes_per_sec": 0 00:20:29.201 }, 00:20:29.201 "claimed": true, 00:20:29.201 "claim_type": "exclusive_write", 00:20:29.201 "zoned": false, 00:20:29.201 "supported_io_types": { 00:20:29.201 "read": true, 00:20:29.201 "write": true, 00:20:29.201 "unmap": true, 00:20:29.201 "write_zeroes": true, 00:20:29.201 "flush": true, 00:20:29.201 "reset": true, 00:20:29.201 "compare": false, 00:20:29.201 "compare_and_write": false, 00:20:29.201 "abort": true, 00:20:29.201 "nvme_admin": false, 00:20:29.201 "nvme_io": false 00:20:29.201 }, 00:20:29.201 "memory_domains": [ 00:20:29.201 { 00:20:29.201 "dma_device_id": "system", 00:20:29.201 "dma_device_type": 1 00:20:29.201 }, 00:20:29.201 { 00:20:29.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.201 "dma_device_type": 2 00:20:29.201 } 00:20:29.201 ], 00:20:29.201 "driver_specific": {} 00:20:29.201 } 00:20:29.201 ] 00:20:29.201 00:40:02 -- common/autotest_common.sh@893 -- # return 0 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.201 00:40:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.459 00:40:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.459 "name": "Existed_Raid", 00:20:29.459 "uuid": "312538d7-b5f0-4c41-b386-8055d956f428", 00:20:29.459 "strip_size_kb": 0, 00:20:29.459 "state": "online", 00:20:29.459 "raid_level": "raid1", 00:20:29.459 "superblock": false, 00:20:29.459 "num_base_bdevs": 4, 00:20:29.459 "num_base_bdevs_discovered": 4, 00:20:29.459 "num_base_bdevs_operational": 4, 00:20:29.459 "base_bdevs_list": [ 00:20:29.459 { 00:20:29.459 "name": "BaseBdev1", 00:20:29.459 "uuid": "1880db7d-5886-4e6f-b3c0-629a50b5f0ab", 00:20:29.459 "is_configured": true, 00:20:29.459 "data_offset": 0, 00:20:29.459 "data_size": 65536 00:20:29.459 }, 00:20:29.459 { 00:20:29.459 "name": "BaseBdev2", 00:20:29.459 "uuid": "dfae53f1-a276-4921-9caa-a3bcaf30d9f2", 00:20:29.459 "is_configured": true, 00:20:29.459 "data_offset": 0, 00:20:29.459 "data_size": 65536 00:20:29.459 }, 00:20:29.459 { 00:20:29.459 "name": "BaseBdev3", 00:20:29.459 "uuid": "b60cc160-8e00-4f35-a285-507720dd0b07", 00:20:29.459 "is_configured": true, 00:20:29.459 "data_offset": 0, 00:20:29.459 "data_size": 65536 00:20:29.459 }, 00:20:29.459 { 00:20:29.459 "name": "BaseBdev4", 00:20:29.459 "uuid": "096690eb-9968-4c6b-af28-35e309fcdf20", 00:20:29.459 "is_configured": true, 00:20:29.459 "data_offset": 0, 
00:20:29.459 "data_size": 65536 00:20:29.459 } 00:20:29.459 ] 00:20:29.459 }' 00:20:29.459 00:40:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.459 00:40:03 -- common/autotest_common.sh@10 -- # set +x 00:20:30.391 00:40:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:30.391 [2024-04-27 00:40:03.882531] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.392 00:40:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.650 00:40:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.650 00:40:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:30.650 00:40:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.650 00:40:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.650 00:40:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.650 00:40:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.650 00:40:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.650 00:40:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.650 00:40:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.650 "name": "Existed_Raid", 00:20:30.650 "uuid": "312538d7-b5f0-4c41-b386-8055d956f428", 00:20:30.650 "strip_size_kb": 0, 00:20:30.650 "state": "online", 00:20:30.650 "raid_level": "raid1", 00:20:30.650 "superblock": false, 00:20:30.650 "num_base_bdevs": 4, 00:20:30.650 "num_base_bdevs_discovered": 3, 00:20:30.650 "num_base_bdevs_operational": 3, 00:20:30.650 "base_bdevs_list": [ 00:20:30.650 { 00:20:30.650 "name": null, 00:20:30.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.650 "is_configured": false, 00:20:30.650 "data_offset": 0, 00:20:30.650 "data_size": 65536 00:20:30.650 }, 00:20:30.650 { 00:20:30.650 "name": "BaseBdev2", 00:20:30.650 "uuid": "dfae53f1-a276-4921-9caa-a3bcaf30d9f2", 00:20:30.650 "is_configured": true, 00:20:30.650 "data_offset": 0, 00:20:30.650 "data_size": 65536 00:20:30.650 }, 00:20:30.650 { 00:20:30.650 "name": "BaseBdev3", 00:20:30.650 "uuid": "b60cc160-8e00-4f35-a285-507720dd0b07", 00:20:30.650 "is_configured": true, 00:20:30.650 "data_offset": 0, 00:20:30.650 "data_size": 65536 00:20:30.650 }, 00:20:30.650 { 00:20:30.650 "name": "BaseBdev4", 00:20:30.650 "uuid": "096690eb-9968-4c6b-af28-35e309fcdf20", 00:20:30.650 "is_configured": true, 00:20:30.650 "data_offset": 0, 00:20:30.650 "data_size": 65536 00:20:30.650 } 00:20:30.650 ] 00:20:30.650 }' 00:20:30.650 00:40:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.650 00:40:04 -- common/autotest_common.sh@10 -- # set +x 00:20:31.584 00:40:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:31.584 00:40:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:31.584 00:40:04 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.584 00:40:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:31.584 00:40:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:31.584 00:40:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:31.584 00:40:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:31.842 [2024-04-27 00:40:05.328083] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:31.842 00:40:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:31.842 00:40:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:31.842 00:40:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.842 00:40:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:32.100 00:40:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:32.100 00:40:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.100 00:40:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:32.358 [2024-04-27 00:40:05.941400] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:32.617 00:40:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:32.617 00:40:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:32.617 00:40:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:32.617 00:40:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.875 00:40:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:32.875 00:40:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.875 00:40:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:33.133 [2024-04-27 00:40:06.545648] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:33.133 [2024-04-27 00:40:06.545968] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.133 [2024-04-27 00:40:06.612799] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.133 [2024-04-27 00:40:06.613085] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.133 [2024-04-27 00:40:06.614637] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:20:33.133 00:40:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:33.133 00:40:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:33.133 00:40:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.133 00:40:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:33.407 00:40:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:33.407 00:40:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:33.407 00:40:06 -- bdev/bdev_raid.sh@287 -- # killprocess 128563 00:20:33.407 00:40:06 -- common/autotest_common.sh@936 -- # '[' -z 128563 ']' 00:20:33.407 00:40:06 -- common/autotest_common.sh@940 -- # kill -0 128563 00:20:33.407 00:40:06 -- common/autotest_common.sh@941 -- # uname 00:20:33.407 
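The removal loop traced at bdev_raid.sh@273-@279 above deletes the surviving base bdevs one at a time, re-checking before each deletion that the array object still answers to its name; once the last member goes, the DEBUG lines show the raid bdev dropping from online to offline. A condensed sketch of that pattern, using only rpc.py calls that appear in the trace (the loop form and the exit-on-mismatch are assumptions, not the script's verbatim source):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
        # the array must still be registered before each removal
        raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        [ "$raid_bdev" != "Existed_Raid" ] && exit 1
        $rpc bdev_malloc_delete "$bdev"
    done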
00:40:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.407 00:40:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 128563 00:20:33.407 00:40:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:33.407 00:40:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:33.407 00:40:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 128563' 00:20:33.407 killing process with pid 128563 00:20:33.407 00:40:06 -- common/autotest_common.sh@955 -- # kill 128563 00:20:33.407 [2024-04-27 00:40:06.858589] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:33.407 00:40:06 -- common/autotest_common.sh@960 -- # wait 128563 00:20:33.407 [2024-04-27 00:40:06.858891] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:34.344 00:40:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:34.344 00:20:34.344 real 0m14.718s 00:20:34.344 user 0m26.251s 00:20:34.344 sys 0m1.769s 00:20:34.344 00:40:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:34.344 ************************************ 00:20:34.344 END TEST raid_state_function_test 00:20:34.344 ************************************ 00:20:34.344 00:40:07 -- common/autotest_common.sh@10 -- # set +x 00:20:34.344 00:40:07 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:20:34.344 00:40:07 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:20:34.344 00:40:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:34.344 00:40:07 -- common/autotest_common.sh@10 -- # set +x 00:20:34.603 ************************************ 00:20:34.603 START TEST raid_state_function_test_sb 00:20:34.603 ************************************ 00:20:34.603 00:40:07 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid1 4 true 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 
00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=129006 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:34.603 Process raid pid: 129006 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129006' 00:20:34.603 00:40:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129006 /var/tmp/spdk-raid.sock 00:20:34.603 00:40:07 -- common/autotest_common.sh@817 -- # '[' -z 129006 ']' 00:20:34.603 00:40:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:34.603 00:40:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.603 00:40:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:34.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:34.603 00:40:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.603 00:40:07 -- common/autotest_common.sh@10 -- # set +x 00:20:34.603 [2024-04-27 00:40:08.018984] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:20:34.603 [2024-04-27 00:40:08.019384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.861 [2024-04-27 00:40:08.191011] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.861 [2024-04-27 00:40:08.396411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.133 [2024-04-27 00:40:08.567512] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.402 00:40:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.402 00:40:08 -- common/autotest_common.sh@850 -- # return 0 00:20:35.402 00:40:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:35.660 [2024-04-27 00:40:09.116767] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:35.660 [2024-04-27 00:40:09.117027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:35.660 [2024-04-27 00:40:09.117149] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:35.660 [2024-04-27 00:40:09.117234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:35.660 [2024-04-27 00:40:09.117449] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:35.660 [2024-04-27 00:40:09.117532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:35.660 [2024-04-27 00:40:09.117755] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:35.660 [2024-04-27 00:40:09.117821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev4 doesn't exist now 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.660 00:40:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.922 00:40:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.922 "name": "Existed_Raid", 00:20:35.922 "uuid": "1cf11c6b-3957-4683-a7f2-c426274a4f75", 00:20:35.922 "strip_size_kb": 0, 00:20:35.922 "state": "configuring", 00:20:35.922 "raid_level": "raid1", 00:20:35.922 "superblock": true, 00:20:35.922 "num_base_bdevs": 4, 00:20:35.922 "num_base_bdevs_discovered": 0, 00:20:35.922 "num_base_bdevs_operational": 4, 00:20:35.922 "base_bdevs_list": [ 00:20:35.922 { 00:20:35.922 "name": "BaseBdev1", 00:20:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.922 "is_configured": false, 00:20:35.922 "data_offset": 0, 00:20:35.922 "data_size": 0 00:20:35.922 }, 00:20:35.922 { 00:20:35.922 "name": "BaseBdev2", 00:20:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.922 "is_configured": false, 00:20:35.922 "data_offset": 0, 00:20:35.922 "data_size": 0 00:20:35.922 }, 00:20:35.922 { 00:20:35.922 "name": "BaseBdev3", 00:20:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.922 "is_configured": false, 00:20:35.922 "data_offset": 0, 00:20:35.922 "data_size": 0 00:20:35.922 }, 00:20:35.922 { 00:20:35.922 "name": "BaseBdev4", 00:20:35.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.922 "is_configured": false, 00:20:35.922 "data_offset": 0, 00:20:35.922 "data_size": 0 00:20:35.922 } 00:20:35.922 ] 00:20:35.922 }' 00:20:35.922 00:40:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.922 00:40:09 -- common/autotest_common.sh@10 -- # set +x 00:20:36.489 00:40:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:36.748 [2024-04-27 00:40:10.264830] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.748 [2024-04-27 00:40:10.265045] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:20:36.748 00:40:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:37.005 [2024-04-27 00:40:10.524922] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:37.005 [2024-04-27 00:40:10.525154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:37.005 [2024-04-27 00:40:10.525269] 
bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:37.005 [2024-04-27 00:40:10.525334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:37.006 [2024-04-27 00:40:10.525432] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:37.006 [2024-04-27 00:40:10.525513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:37.006 [2024-04-27 00:40:10.525633] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:37.006 [2024-04-27 00:40:10.525698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:37.006 00:40:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:37.263 [2024-04-27 00:40:10.807319] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:37.263 BaseBdev1 00:20:37.263 00:40:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:37.263 00:40:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:37.263 00:40:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:37.263 00:40:10 -- common/autotest_common.sh@887 -- # local i 00:20:37.263 00:40:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:37.263 00:40:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:37.263 00:40:10 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:37.522 00:40:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:37.780 [ 00:20:37.781 { 00:20:37.781 "name": "BaseBdev1", 00:20:37.781 "aliases": [ 00:20:37.781 "ceb3d90c-649b-49fc-9ac8-9aa38ce4648e" 00:20:37.781 ], 00:20:37.781 "product_name": "Malloc disk", 00:20:37.781 "block_size": 512, 00:20:37.781 "num_blocks": 65536, 00:20:37.781 "uuid": "ceb3d90c-649b-49fc-9ac8-9aa38ce4648e", 00:20:37.781 "assigned_rate_limits": { 00:20:37.781 "rw_ios_per_sec": 0, 00:20:37.781 "rw_mbytes_per_sec": 0, 00:20:37.781 "r_mbytes_per_sec": 0, 00:20:37.781 "w_mbytes_per_sec": 0 00:20:37.781 }, 00:20:37.781 "claimed": true, 00:20:37.781 "claim_type": "exclusive_write", 00:20:37.781 "zoned": false, 00:20:37.781 "supported_io_types": { 00:20:37.781 "read": true, 00:20:37.781 "write": true, 00:20:37.781 "unmap": true, 00:20:37.781 "write_zeroes": true, 00:20:37.781 "flush": true, 00:20:37.781 "reset": true, 00:20:37.781 "compare": false, 00:20:37.781 "compare_and_write": false, 00:20:37.781 "abort": true, 00:20:37.781 "nvme_admin": false, 00:20:37.781 "nvme_io": false 00:20:37.781 }, 00:20:37.781 "memory_domains": [ 00:20:37.781 { 00:20:37.781 "dma_device_id": "system", 00:20:37.781 "dma_device_type": 1 00:20:37.781 }, 00:20:37.781 { 00:20:37.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.781 "dma_device_type": 2 00:20:37.781 } 00:20:37.781 ], 00:20:37.781 "driver_specific": {} 00:20:37.781 } 00:20:37.781 ] 00:20:37.781 00:40:11 -- common/autotest_common.sh@893 -- # return 0 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:37.781 
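Note the creation order the superblock variant just traced: bdev_raid_create is issued while all four base bdevs are still missing ("doesn't exist now"), the array parks in the "configuring" state, and only then are the malloc members registered one by one. A minimal sketch of that order (same socket and sizes as the trace; the .state projection on the jq filter is added here for illustration):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # "configuring"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1   # first member; three still missing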
00:40:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.781 00:40:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.038 00:40:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:38.038 "name": "Existed_Raid", 00:20:38.039 "uuid": "94b5177f-e743-4237-a7f1-3a23376156c9", 00:20:38.039 "strip_size_kb": 0, 00:20:38.039 "state": "configuring", 00:20:38.039 "raid_level": "raid1", 00:20:38.039 "superblock": true, 00:20:38.039 "num_base_bdevs": 4, 00:20:38.039 "num_base_bdevs_discovered": 1, 00:20:38.039 "num_base_bdevs_operational": 4, 00:20:38.039 "base_bdevs_list": [ 00:20:38.039 { 00:20:38.039 "name": "BaseBdev1", 00:20:38.039 "uuid": "ceb3d90c-649b-49fc-9ac8-9aa38ce4648e", 00:20:38.039 "is_configured": true, 00:20:38.039 "data_offset": 2048, 00:20:38.039 "data_size": 63488 00:20:38.039 }, 00:20:38.039 { 00:20:38.039 "name": "BaseBdev2", 00:20:38.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.039 "is_configured": false, 00:20:38.039 "data_offset": 0, 00:20:38.039 "data_size": 0 00:20:38.039 }, 00:20:38.039 { 00:20:38.039 "name": "BaseBdev3", 00:20:38.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.039 "is_configured": false, 00:20:38.039 "data_offset": 0, 00:20:38.039 "data_size": 0 00:20:38.039 }, 00:20:38.039 { 00:20:38.039 "name": "BaseBdev4", 00:20:38.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.039 "is_configured": false, 00:20:38.039 "data_offset": 0, 00:20:38.039 "data_size": 0 00:20:38.039 } 00:20:38.039 ] 00:20:38.039 }' 00:20:38.039 00:40:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:38.039 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:20:38.603 00:40:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:38.861 [2024-04-27 00:40:12.343723] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:38.861 [2024-04-27 00:40:12.343905] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:20:38.861 00:40:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:38.861 00:40:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:39.120 00:40:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:39.380 BaseBdev1 00:20:39.380 00:40:12 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:39.380 00:40:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:20:39.380 00:40:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:39.380 00:40:12 -- common/autotest_common.sh@887 -- # local i 00:20:39.380 00:40:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:39.380 00:40:12 -- common/autotest_common.sh@888 -- 
# bdev_timeout=2000 00:20:39.380 00:40:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:39.637 00:40:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:39.895 [ 00:20:39.895 { 00:20:39.895 "name": "BaseBdev1", 00:20:39.895 "aliases": [ 00:20:39.895 "e77c04a0-32a0-4a91-a40b-a73ac9d19fb2" 00:20:39.895 ], 00:20:39.895 "product_name": "Malloc disk", 00:20:39.895 "block_size": 512, 00:20:39.895 "num_blocks": 65536, 00:20:39.895 "uuid": "e77c04a0-32a0-4a91-a40b-a73ac9d19fb2", 00:20:39.895 "assigned_rate_limits": { 00:20:39.895 "rw_ios_per_sec": 0, 00:20:39.895 "rw_mbytes_per_sec": 0, 00:20:39.895 "r_mbytes_per_sec": 0, 00:20:39.895 "w_mbytes_per_sec": 0 00:20:39.895 }, 00:20:39.895 "claimed": false, 00:20:39.895 "zoned": false, 00:20:39.895 "supported_io_types": { 00:20:39.895 "read": true, 00:20:39.895 "write": true, 00:20:39.895 "unmap": true, 00:20:39.895 "write_zeroes": true, 00:20:39.895 "flush": true, 00:20:39.895 "reset": true, 00:20:39.895 "compare": false, 00:20:39.895 "compare_and_write": false, 00:20:39.895 "abort": true, 00:20:39.895 "nvme_admin": false, 00:20:39.895 "nvme_io": false 00:20:39.895 }, 00:20:39.895 "memory_domains": [ 00:20:39.895 { 00:20:39.895 "dma_device_id": "system", 00:20:39.895 "dma_device_type": 1 00:20:39.895 }, 00:20:39.895 { 00:20:39.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.895 "dma_device_type": 2 00:20:39.895 } 00:20:39.895 ], 00:20:39.895 "driver_specific": {} 00:20:39.895 } 00:20:39.895 ] 00:20:39.895 00:40:13 -- common/autotest_common.sh@893 -- # return 0 00:20:39.895 00:40:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:40.153 [2024-04-27 00:40:13.639833] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.153 [2024-04-27 00:40:13.641762] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:40.153 [2024-04-27 00:40:13.641982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:40.153 [2024-04-27 00:40:13.642140] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:40.153 [2024-04-27 00:40:13.642206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:40.153 [2024-04-27 00:40:13.642385] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:40.153 [2024-04-27 00:40:13.642447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@122 -- # 
local raid_bdev_info 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.153 00:40:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.412 00:40:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:40.412 "name": "Existed_Raid", 00:20:40.412 "uuid": "b41038de-d42f-47e2-aa02-c3f0d64e8d3b", 00:20:40.412 "strip_size_kb": 0, 00:20:40.412 "state": "configuring", 00:20:40.412 "raid_level": "raid1", 00:20:40.412 "superblock": true, 00:20:40.412 "num_base_bdevs": 4, 00:20:40.412 "num_base_bdevs_discovered": 1, 00:20:40.412 "num_base_bdevs_operational": 4, 00:20:40.412 "base_bdevs_list": [ 00:20:40.412 { 00:20:40.412 "name": "BaseBdev1", 00:20:40.412 "uuid": "e77c04a0-32a0-4a91-a40b-a73ac9d19fb2", 00:20:40.412 "is_configured": true, 00:20:40.412 "data_offset": 2048, 00:20:40.412 "data_size": 63488 00:20:40.412 }, 00:20:40.412 { 00:20:40.412 "name": "BaseBdev2", 00:20:40.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.412 "is_configured": false, 00:20:40.412 "data_offset": 0, 00:20:40.412 "data_size": 0 00:20:40.412 }, 00:20:40.412 { 00:20:40.412 "name": "BaseBdev3", 00:20:40.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.412 "is_configured": false, 00:20:40.412 "data_offset": 0, 00:20:40.412 "data_size": 0 00:20:40.412 }, 00:20:40.412 { 00:20:40.412 "name": "BaseBdev4", 00:20:40.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.412 "is_configured": false, 00:20:40.412 "data_offset": 0, 00:20:40.412 "data_size": 0 00:20:40.412 } 00:20:40.412 ] 00:20:40.412 }' 00:20:40.412 00:40:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:40.412 00:40:13 -- common/autotest_common.sh@10 -- # set +x 00:20:40.977 00:40:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:41.236 [2024-04-27 00:40:14.711299] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:41.236 BaseBdev2 00:20:41.236 00:40:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:41.236 00:40:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:20:41.236 00:40:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:41.236 00:40:14 -- common/autotest_common.sh@887 -- # local i 00:20:41.236 00:40:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:41.236 00:40:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:41.236 00:40:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:41.495 00:40:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:41.753 [ 00:20:41.753 { 00:20:41.753 "name": "BaseBdev2", 00:20:41.753 "aliases": [ 00:20:41.753 "b2f302b0-4309-4a51-b413-f8173aeae5d0" 00:20:41.753 ], 00:20:41.753 "product_name": "Malloc disk", 00:20:41.753 "block_size": 512, 00:20:41.753 "num_blocks": 65536, 00:20:41.753 "uuid": "b2f302b0-4309-4a51-b413-f8173aeae5d0", 00:20:41.753 "assigned_rate_limits": { 00:20:41.753 "rw_ios_per_sec": 0, 00:20:41.753 "rw_mbytes_per_sec": 0, 00:20:41.753 
"r_mbytes_per_sec": 0, 00:20:41.753 "w_mbytes_per_sec": 0 00:20:41.753 }, 00:20:41.753 "claimed": true, 00:20:41.753 "claim_type": "exclusive_write", 00:20:41.753 "zoned": false, 00:20:41.753 "supported_io_types": { 00:20:41.753 "read": true, 00:20:41.753 "write": true, 00:20:41.753 "unmap": true, 00:20:41.753 "write_zeroes": true, 00:20:41.753 "flush": true, 00:20:41.753 "reset": true, 00:20:41.753 "compare": false, 00:20:41.753 "compare_and_write": false, 00:20:41.753 "abort": true, 00:20:41.753 "nvme_admin": false, 00:20:41.753 "nvme_io": false 00:20:41.753 }, 00:20:41.753 "memory_domains": [ 00:20:41.753 { 00:20:41.753 "dma_device_id": "system", 00:20:41.753 "dma_device_type": 1 00:20:41.753 }, 00:20:41.753 { 00:20:41.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.753 "dma_device_type": 2 00:20:41.753 } 00:20:41.753 ], 00:20:41.753 "driver_specific": {} 00:20:41.753 } 00:20:41.753 ] 00:20:41.753 00:40:15 -- common/autotest_common.sh@893 -- # return 0 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.753 00:40:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.012 00:40:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.012 "name": "Existed_Raid", 00:20:42.012 "uuid": "b41038de-d42f-47e2-aa02-c3f0d64e8d3b", 00:20:42.012 "strip_size_kb": 0, 00:20:42.012 "state": "configuring", 00:20:42.012 "raid_level": "raid1", 00:20:42.012 "superblock": true, 00:20:42.012 "num_base_bdevs": 4, 00:20:42.012 "num_base_bdevs_discovered": 2, 00:20:42.012 "num_base_bdevs_operational": 4, 00:20:42.012 "base_bdevs_list": [ 00:20:42.012 { 00:20:42.012 "name": "BaseBdev1", 00:20:42.012 "uuid": "e77c04a0-32a0-4a91-a40b-a73ac9d19fb2", 00:20:42.012 "is_configured": true, 00:20:42.012 "data_offset": 2048, 00:20:42.012 "data_size": 63488 00:20:42.012 }, 00:20:42.012 { 00:20:42.012 "name": "BaseBdev2", 00:20:42.012 "uuid": "b2f302b0-4309-4a51-b413-f8173aeae5d0", 00:20:42.012 "is_configured": true, 00:20:42.012 "data_offset": 2048, 00:20:42.012 "data_size": 63488 00:20:42.012 }, 00:20:42.012 { 00:20:42.012 "name": "BaseBdev3", 00:20:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.012 "is_configured": false, 00:20:42.012 "data_offset": 0, 00:20:42.012 "data_size": 0 00:20:42.012 }, 00:20:42.012 { 00:20:42.012 "name": "BaseBdev4", 00:20:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.012 "is_configured": false, 00:20:42.012 "data_offset": 0, 00:20:42.012 "data_size": 0 00:20:42.012 } 00:20:42.012 ] 
00:20:42.012 }' 00:20:42.012 00:40:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:42.012 00:40:15 -- common/autotest_common.sh@10 -- # set +x 00:20:42.590 00:40:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:42.865 [2024-04-27 00:40:16.370630] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:42.865 BaseBdev3 00:20:42.865 00:40:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:42.865 00:40:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:20:42.865 00:40:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:42.865 00:40:16 -- common/autotest_common.sh@887 -- # local i 00:20:42.865 00:40:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:42.865 00:40:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:42.865 00:40:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:43.123 00:40:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:43.382 [ 00:20:43.382 { 00:20:43.382 "name": "BaseBdev3", 00:20:43.382 "aliases": [ 00:20:43.382 "8f48e107-f3ba-4e76-a034-f341833946c5" 00:20:43.382 ], 00:20:43.382 "product_name": "Malloc disk", 00:20:43.382 "block_size": 512, 00:20:43.382 "num_blocks": 65536, 00:20:43.382 "uuid": "8f48e107-f3ba-4e76-a034-f341833946c5", 00:20:43.382 "assigned_rate_limits": { 00:20:43.382 "rw_ios_per_sec": 0, 00:20:43.382 "rw_mbytes_per_sec": 0, 00:20:43.382 "r_mbytes_per_sec": 0, 00:20:43.382 "w_mbytes_per_sec": 0 00:20:43.382 }, 00:20:43.382 "claimed": true, 00:20:43.382 "claim_type": "exclusive_write", 00:20:43.382 "zoned": false, 00:20:43.382 "supported_io_types": { 00:20:43.382 "read": true, 00:20:43.382 "write": true, 00:20:43.382 "unmap": true, 00:20:43.382 "write_zeroes": true, 00:20:43.382 "flush": true, 00:20:43.382 "reset": true, 00:20:43.382 "compare": false, 00:20:43.382 "compare_and_write": false, 00:20:43.382 "abort": true, 00:20:43.382 "nvme_admin": false, 00:20:43.382 "nvme_io": false 00:20:43.382 }, 00:20:43.382 "memory_domains": [ 00:20:43.382 { 00:20:43.382 "dma_device_id": "system", 00:20:43.382 "dma_device_type": 1 00:20:43.382 }, 00:20:43.382 { 00:20:43.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.382 "dma_device_type": 2 00:20:43.382 } 00:20:43.382 ], 00:20:43.382 "driver_specific": {} 00:20:43.382 } 00:20:43.382 ] 00:20:43.382 00:40:16 -- common/autotest_common.sh@893 -- # return 0 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
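Every bdev_malloc_create above is followed by the same waitforbdev handshake from common/autotest_common.sh@885-@893: default the timeout to 2000 ms, flush pending examine callbacks, then fetch the bdev with that timeout. A rough reconstruction (the real helper also declares a retry counter i, and $rpc stands for the rpc.py shorthand from the earlier sketch; treat this as a simplification, not the helper's source):

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=$2
        local i
        [[ -z $bdev_timeout ]] && bdev_timeout=2000   # ms, the default seen in the trace
        $rpc bdev_wait_for_examine
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
        return 0
    }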
00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.382 00:40:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.641 00:40:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:43.641 "name": "Existed_Raid", 00:20:43.641 "uuid": "b41038de-d42f-47e2-aa02-c3f0d64e8d3b", 00:20:43.641 "strip_size_kb": 0, 00:20:43.641 "state": "configuring", 00:20:43.641 "raid_level": "raid1", 00:20:43.641 "superblock": true, 00:20:43.641 "num_base_bdevs": 4, 00:20:43.641 "num_base_bdevs_discovered": 3, 00:20:43.641 "num_base_bdevs_operational": 4, 00:20:43.641 "base_bdevs_list": [ 00:20:43.641 { 00:20:43.641 "name": "BaseBdev1", 00:20:43.641 "uuid": "e77c04a0-32a0-4a91-a40b-a73ac9d19fb2", 00:20:43.641 "is_configured": true, 00:20:43.641 "data_offset": 2048, 00:20:43.641 "data_size": 63488 00:20:43.641 }, 00:20:43.641 { 00:20:43.641 "name": "BaseBdev2", 00:20:43.641 "uuid": "b2f302b0-4309-4a51-b413-f8173aeae5d0", 00:20:43.641 "is_configured": true, 00:20:43.641 "data_offset": 2048, 00:20:43.641 "data_size": 63488 00:20:43.641 }, 00:20:43.641 { 00:20:43.641 "name": "BaseBdev3", 00:20:43.641 "uuid": "8f48e107-f3ba-4e76-a034-f341833946c5", 00:20:43.641 "is_configured": true, 00:20:43.641 "data_offset": 2048, 00:20:43.641 "data_size": 63488 00:20:43.641 }, 00:20:43.641 { 00:20:43.641 "name": "BaseBdev4", 00:20:43.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.641 "is_configured": false, 00:20:43.641 "data_offset": 0, 00:20:43.641 "data_size": 0 00:20:43.641 } 00:20:43.641 ] 00:20:43.641 }' 00:20:43.641 00:40:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:43.641 00:40:17 -- common/autotest_common.sh@10 -- # set +x 00:20:44.574 00:40:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:44.574 [2024-04-27 00:40:18.028542] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:44.574 [2024-04-27 00:40:18.029077] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:20:44.574 [2024-04-27 00:40:18.029206] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:44.574 [2024-04-27 00:40:18.029389] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:44.574 BaseBdev4 00:20:44.574 [2024-04-27 00:40:18.029854] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:20:44.574 [2024-04-27 00:40:18.029870] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:20:44.574 [2024-04-27 00:40:18.030059] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.574 00:40:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:44.574 00:40:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:20:44.574 00:40:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:44.574 00:40:18 -- common/autotest_common.sh@887 -- # local i 00:20:44.574 00:40:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:44.574 00:40:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:44.574 00:40:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.832 
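The malloc geometry is consistent across every bdev_get_bdevs dump in this trace: bdev_malloc_create 32 512 asks for a 32 MiB bdev with 512-byte blocks, and the -s superblock variant reserves the first 2048 blocks of each member for raid metadata:

    32 MiB / 512 B        = 65536 blocks   ("num_blocks": 65536)
    65536 - 2048 (offset) = 63488 blocks   ("data_offset": 2048, "data_size": 63488)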
00:40:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:45.091 [ 00:20:45.091 { 00:20:45.091 "name": "BaseBdev4", 00:20:45.091 "aliases": [ 00:20:45.091 "0737ef56-be83-4625-a451-5052c890c56e" 00:20:45.091 ], 00:20:45.091 "product_name": "Malloc disk", 00:20:45.091 "block_size": 512, 00:20:45.091 "num_blocks": 65536, 00:20:45.091 "uuid": "0737ef56-be83-4625-a451-5052c890c56e", 00:20:45.091 "assigned_rate_limits": { 00:20:45.091 "rw_ios_per_sec": 0, 00:20:45.091 "rw_mbytes_per_sec": 0, 00:20:45.091 "r_mbytes_per_sec": 0, 00:20:45.091 "w_mbytes_per_sec": 0 00:20:45.091 }, 00:20:45.091 "claimed": true, 00:20:45.091 "claim_type": "exclusive_write", 00:20:45.091 "zoned": false, 00:20:45.091 "supported_io_types": { 00:20:45.091 "read": true, 00:20:45.091 "write": true, 00:20:45.091 "unmap": true, 00:20:45.091 "write_zeroes": true, 00:20:45.091 "flush": true, 00:20:45.091 "reset": true, 00:20:45.091 "compare": false, 00:20:45.091 "compare_and_write": false, 00:20:45.091 "abort": true, 00:20:45.091 "nvme_admin": false, 00:20:45.091 "nvme_io": false 00:20:45.091 }, 00:20:45.091 "memory_domains": [ 00:20:45.091 { 00:20:45.091 "dma_device_id": "system", 00:20:45.091 "dma_device_type": 1 00:20:45.091 }, 00:20:45.091 { 00:20:45.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.091 "dma_device_type": 2 00:20:45.091 } 00:20:45.091 ], 00:20:45.091 "driver_specific": {} 00:20:45.091 } 00:20:45.091 ] 00:20:45.091 00:40:18 -- common/autotest_common.sh@893 -- # return 0 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.091 00:40:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.349 00:40:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:45.349 "name": "Existed_Raid", 00:20:45.349 "uuid": "b41038de-d42f-47e2-aa02-c3f0d64e8d3b", 00:20:45.349 "strip_size_kb": 0, 00:20:45.349 "state": "online", 00:20:45.349 "raid_level": "raid1", 00:20:45.349 "superblock": true, 00:20:45.349 "num_base_bdevs": 4, 00:20:45.349 "num_base_bdevs_discovered": 4, 00:20:45.349 "num_base_bdevs_operational": 4, 00:20:45.349 "base_bdevs_list": [ 00:20:45.349 { 00:20:45.349 "name": "BaseBdev1", 00:20:45.349 "uuid": "e77c04a0-32a0-4a91-a40b-a73ac9d19fb2", 00:20:45.349 "is_configured": true, 00:20:45.349 "data_offset": 2048, 00:20:45.349 "data_size": 63488 00:20:45.349 }, 00:20:45.349 { 00:20:45.349 "name": "BaseBdev2", 00:20:45.349 "uuid": 
"b2f302b0-4309-4a51-b413-f8173aeae5d0", 00:20:45.349 "is_configured": true, 00:20:45.349 "data_offset": 2048, 00:20:45.349 "data_size": 63488 00:20:45.349 }, 00:20:45.349 { 00:20:45.349 "name": "BaseBdev3", 00:20:45.349 "uuid": "8f48e107-f3ba-4e76-a034-f341833946c5", 00:20:45.349 "is_configured": true, 00:20:45.349 "data_offset": 2048, 00:20:45.349 "data_size": 63488 00:20:45.349 }, 00:20:45.349 { 00:20:45.349 "name": "BaseBdev4", 00:20:45.349 "uuid": "0737ef56-be83-4625-a451-5052c890c56e", 00:20:45.349 "is_configured": true, 00:20:45.349 "data_offset": 2048, 00:20:45.349 "data_size": 63488 00:20:45.349 } 00:20:45.349 ] 00:20:45.349 }' 00:20:45.349 00:40:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:45.349 00:40:18 -- common/autotest_common.sh@10 -- # set +x 00:20:45.914 00:40:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:46.172 [2024-04-27 00:40:19.580991] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.172 00:40:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.430 00:40:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.430 "name": "Existed_Raid", 00:20:46.430 "uuid": "b41038de-d42f-47e2-aa02-c3f0d64e8d3b", 00:20:46.430 "strip_size_kb": 0, 00:20:46.430 "state": "online", 00:20:46.430 "raid_level": "raid1", 00:20:46.430 "superblock": true, 00:20:46.430 "num_base_bdevs": 4, 00:20:46.430 "num_base_bdevs_discovered": 3, 00:20:46.430 "num_base_bdevs_operational": 3, 00:20:46.430 "base_bdevs_list": [ 00:20:46.430 { 00:20:46.430 "name": null, 00:20:46.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.430 "is_configured": false, 00:20:46.430 "data_offset": 2048, 00:20:46.430 "data_size": 63488 00:20:46.430 }, 00:20:46.430 { 00:20:46.431 "name": "BaseBdev2", 00:20:46.431 "uuid": "b2f302b0-4309-4a51-b413-f8173aeae5d0", 00:20:46.431 "is_configured": true, 00:20:46.431 "data_offset": 2048, 00:20:46.431 "data_size": 63488 00:20:46.431 }, 00:20:46.431 { 00:20:46.431 "name": "BaseBdev3", 00:20:46.431 "uuid": "8f48e107-f3ba-4e76-a034-f341833946c5", 00:20:46.431 "is_configured": true, 00:20:46.431 "data_offset": 2048, 00:20:46.431 "data_size": 63488 
00:20:46.431 }, 00:20:46.431 { 00:20:46.431 "name": "BaseBdev4", 00:20:46.431 "uuid": "0737ef56-be83-4625-a451-5052c890c56e", 00:20:46.431 "is_configured": true, 00:20:46.431 "data_offset": 2048, 00:20:46.431 "data_size": 63488 00:20:46.431 } 00:20:46.431 ] 00:20:46.431 }' 00:20:46.431 00:40:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.431 00:40:19 -- common/autotest_common.sh@10 -- # set +x 00:20:46.996 00:40:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:46.996 00:40:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:46.997 00:40:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.997 00:40:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:47.255 00:40:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:47.255 00:40:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:47.255 00:40:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:47.514 [2024-04-27 00:40:20.976196] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:47.514 00:40:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:47.514 00:40:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:47.514 00:40:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.514 00:40:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:47.772 00:40:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:47.772 00:40:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:47.772 00:40:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:48.032 [2024-04-27 00:40:21.491849] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:48.032 00:40:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:48.032 00:40:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:48.032 00:40:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.032 00:40:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:48.291 00:40:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:48.291 00:40:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:48.291 00:40:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:48.549 [2024-04-27 00:40:22.024385] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:48.549 [2024-04-27 00:40:22.024691] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.549 [2024-04-27 00:40:22.091029] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.549 [2024-04-27 00:40:22.091439] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.549 [2024-04-27 00:40:22.091606] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:20:48.549 00:40:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:48.549 00:40:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:48.549 00:40:22 -- bdev/bdev_raid.sh@281 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.549 00:40:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:48.809 00:40:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:48.809 00:40:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:48.809 00:40:22 -- bdev/bdev_raid.sh@287 -- # killprocess 129006 00:20:48.809 00:40:22 -- common/autotest_common.sh@936 -- # '[' -z 129006 ']' 00:20:48.809 00:40:22 -- common/autotest_common.sh@940 -- # kill -0 129006 00:20:48.809 00:40:22 -- common/autotest_common.sh@941 -- # uname 00:20:48.809 00:40:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.809 00:40:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 129006 00:20:48.809 killing process with pid 129006 00:20:48.809 00:40:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:48.809 00:40:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:48.809 00:40:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129006' 00:20:48.809 00:40:22 -- common/autotest_common.sh@955 -- # kill 129006 00:20:48.809 00:40:22 -- common/autotest_common.sh@960 -- # wait 129006 00:20:48.809 [2024-04-27 00:40:22.347444] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.809 [2024-04-27 00:40:22.347555] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.794 ************************************ 00:20:49.794 END TEST raid_state_function_test_sb 00:20:49.794 ************************************ 00:20:49.794 00:40:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:49.794 00:20:49.794 real 0m15.386s 00:20:49.794 user 0m27.455s 00:20:49.794 sys 0m1.843s 00:20:49.794 00:40:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:49.794 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:20:49.794 00:40:23 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:49.794 00:40:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:20:49.794 00:40:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:49.794 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:20:50.053 ************************************ 00:20:50.053 START TEST raid_superblock_test 00:20:50.053 ************************************ 00:20:50.053 00:40:23 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid1 4 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@353 -- # 
strip_size=0 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@357 -- # raid_pid=129472 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:50.053 00:40:23 -- bdev/bdev_raid.sh@358 -- # waitforlisten 129472 /var/tmp/spdk-raid.sock 00:20:50.053 00:40:23 -- common/autotest_common.sh@817 -- # '[' -z 129472 ']' 00:20:50.053 00:40:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:50.053 00:40:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:50.053 00:40:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:50.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:50.053 00:40:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:50.053 00:40:23 -- common/autotest_common.sh@10 -- # set +x 00:20:50.053 [2024-04-27 00:40:23.490079] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:20:50.053 [2024-04-27 00:40:23.490572] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129472 ] 00:20:50.312 [2024-04-27 00:40:23.654532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.312 [2024-04-27 00:40:23.843737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.570 [2024-04-27 00:40:24.022243] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.829 00:40:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:50.829 00:40:24 -- common/autotest_common.sh@850 -- # return 0 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:50.829 00:40:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:51.087 malloc1 00:20:51.087 00:40:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:51.345 [2024-04-27 00:40:24.845176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:51.346 [2024-04-27 00:40:24.845471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.346 [2024-04-27 00:40:24.845656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:51.346 [2024-04-27 00:40:24.845840] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.346 [2024-04-27 00:40:24.848491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.346 [2024-04-27 00:40:24.848692] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:51.346 pt1 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:51.346 00:40:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:51.604 malloc2 00:20:51.604 00:40:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:51.863 [2024-04-27 00:40:25.323276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:51.863 [2024-04-27 00:40:25.323547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.863 [2024-04-27 00:40:25.323736] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:51.863 [2024-04-27 00:40:25.323932] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.863 [2024-04-27 00:40:25.326349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.863 [2024-04-27 00:40:25.326593] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:51.863 pt2 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:51.863 00:40:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:52.122 malloc3 00:20:52.122 00:40:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:52.380 [2024-04-27 00:40:25.813726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:52.380 [2024-04-27 00:40:25.814014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.380 [2024-04-27 00:40:25.814287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:52.380 [2024-04-27 00:40:25.814551] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.380 [2024-04-27 00:40:25.817019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.380 [2024-04-27 00:40:25.817225] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:52.380 pt3 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:52.380 00:40:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:52.637 malloc4 00:20:52.637 00:40:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:52.895 [2024-04-27 00:40:26.276656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:52.895 [2024-04-27 00:40:26.276940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.895 [2024-04-27 00:40:26.277129] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:52.895 [2024-04-27 00:40:26.277320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.895 [2024-04-27 00:40:26.280100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.895 [2024-04-27 00:40:26.280294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:52.895 pt4 00:20:52.895 00:40:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:52.895 00:40:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:52.895 00:40:26 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:53.153 [2024-04-27 00:40:26.488798] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:53.153 [2024-04-27 00:40:26.490858] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:53.153 [2024-04-27 00:40:26.491142] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:53.153 [2024-04-27 00:40:26.491374] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:53.153 [2024-04-27 00:40:26.491773] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:20:53.153 [2024-04-27 00:40:26.491903] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:53.153 [2024-04-27 00:40:26.492188] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:53.153 [2024-04-27 00:40:26.492700] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:20:53.153 [2024-04-27 00:40:26.492820] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:20:53.153 [2024-04-27 00:40:26.493195] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 
00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.153 00:40:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.412 00:40:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:53.412 "name": "raid_bdev1", 00:20:53.412 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:20:53.412 "strip_size_kb": 0, 00:20:53.412 "state": "online", 00:20:53.412 "raid_level": "raid1", 00:20:53.412 "superblock": true, 00:20:53.412 "num_base_bdevs": 4, 00:20:53.412 "num_base_bdevs_discovered": 4, 00:20:53.412 "num_base_bdevs_operational": 4, 00:20:53.412 "base_bdevs_list": [ 00:20:53.412 { 00:20:53.412 "name": "pt1", 00:20:53.412 "uuid": "a0a585d9-70bd-5627-8aa0-49063d5aaba0", 00:20:53.412 "is_configured": true, 00:20:53.412 "data_offset": 2048, 00:20:53.412 "data_size": 63488 00:20:53.412 }, 00:20:53.412 { 00:20:53.412 "name": "pt2", 00:20:53.412 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:20:53.412 "is_configured": true, 00:20:53.412 "data_offset": 2048, 00:20:53.412 "data_size": 63488 00:20:53.412 }, 00:20:53.412 { 00:20:53.412 "name": "pt3", 00:20:53.412 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:20:53.412 "is_configured": true, 00:20:53.412 "data_offset": 2048, 00:20:53.412 "data_size": 63488 00:20:53.412 }, 00:20:53.412 { 00:20:53.412 "name": "pt4", 00:20:53.412 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:20:53.412 "is_configured": true, 00:20:53.412 "data_offset": 2048, 00:20:53.412 "data_size": 63488 00:20:53.412 } 00:20:53.412 ] 00:20:53.412 }' 00:20:53.412 00:40:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:53.412 00:40:26 -- common/autotest_common.sh@10 -- # set +x 00:20:53.978 00:40:27 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:53.978 00:40:27 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:54.236 [2024-04-27 00:40:27.565637] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.236 00:40:27 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=c944ce48-8b56-4465-a350-56830542c12a 00:20:54.236 00:40:27 -- bdev/bdev_raid.sh@380 -- # '[' -z c944ce48-8b56-4465-a350-56830542c12a ']' 00:20:54.236 00:40:27 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:54.495 [2024-04-27 00:40:27.845353] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:54.495 [2024-04-27 00:40:27.845576] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:54.495 [2024-04-27 00:40:27.845768] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:54.495 [2024-04-27 00:40:27.845964] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:54.495 [2024-04-27 00:40:27.846064] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:20:54.495 00:40:27 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.495 00:40:27 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:54.753 00:40:28 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:54.753 00:40:28 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:54.753 00:40:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:54.753 00:40:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:54.753 00:40:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:54.753 00:40:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:55.011 00:40:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.011 00:40:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:55.269 00:40:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:55.269 00:40:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:55.527 00:40:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:55.527 00:40:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:55.785 00:40:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:55.785 00:40:29 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:55.785 00:40:29 -- common/autotest_common.sh@638 -- # local es=0 00:20:55.785 00:40:29 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:55.785 00:40:29 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.786 00:40:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:55.786 00:40:29 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.786 00:40:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:55.786 00:40:29 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.786 00:40:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:55.786 00:40:29 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.786 00:40:29 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:55.786 00:40:29 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:56.044 [2024-04-27 00:40:29.493682] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:56.044 [2024-04-27 00:40:29.495866] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:56.044 [2024-04-27 00:40:29.496050] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:56.044 [2024-04-27 00:40:29.496224] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:56.044 [2024-04-27 00:40:29.496380] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:56.044 [2024-04-27 00:40:29.496552] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:56.044 [2024-04-27 00:40:29.496685] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:56.044 [2024-04-27 00:40:29.496839] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:56.044 [2024-04-27 00:40:29.496956] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.044 [2024-04-27 00:40:29.497045] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:20:56.044 request: 00:20:56.044 { 00:20:56.044 "name": "raid_bdev1", 00:20:56.044 "raid_level": "raid1", 00:20:56.044 "base_bdevs": [ 00:20:56.044 "malloc1", 00:20:56.044 "malloc2", 00:20:56.044 "malloc3", 00:20:56.044 "malloc4" 00:20:56.044 ], 00:20:56.044 "superblock": false, 00:20:56.044 "method": "bdev_raid_create", 00:20:56.044 "req_id": 1 00:20:56.044 } 00:20:56.044 Got JSON-RPC error response 00:20:56.044 response: 00:20:56.044 { 00:20:56.044 "code": -17, 00:20:56.044 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:56.044 } 00:20:56.044 00:40:29 -- common/autotest_common.sh@641 -- # es=1 00:20:56.044 00:40:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:56.044 00:40:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:56.044 00:40:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:56.044 00:40:29 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.044 00:40:29 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:56.303 00:40:29 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:56.303 00:40:29 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:56.303 00:40:29 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:56.561 [2024-04-27 00:40:30.021788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:56.561 [2024-04-27 00:40:30.022064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.561 [2024-04-27 00:40:30.022139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:56.561 [2024-04-27 00:40:30.022443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.561 [2024-04-27 00:40:30.025357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.561 [2024-04-27 00:40:30.025576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:56.561 [2024-04-27 00:40:30.025808] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:56.561 [2024-04-27 00:40:30.025975] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:56.561 pt1 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.561 00:40:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.834 00:40:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:56.834 "name": "raid_bdev1", 00:20:56.834 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:20:56.834 "strip_size_kb": 0, 00:20:56.834 "state": "configuring", 00:20:56.834 "raid_level": "raid1", 00:20:56.834 "superblock": true, 00:20:56.834 "num_base_bdevs": 4, 00:20:56.834 "num_base_bdevs_discovered": 1, 00:20:56.834 "num_base_bdevs_operational": 4, 00:20:56.834 "base_bdevs_list": [ 00:20:56.834 { 00:20:56.834 "name": "pt1", 00:20:56.834 "uuid": "a0a585d9-70bd-5627-8aa0-49063d5aaba0", 00:20:56.834 "is_configured": true, 00:20:56.834 "data_offset": 2048, 00:20:56.834 "data_size": 63488 00:20:56.834 }, 00:20:56.834 { 00:20:56.834 "name": null, 00:20:56.834 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:20:56.834 "is_configured": false, 00:20:56.834 "data_offset": 2048, 00:20:56.834 "data_size": 63488 00:20:56.834 }, 00:20:56.834 { 00:20:56.834 "name": null, 00:20:56.834 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:20:56.834 "is_configured": false, 00:20:56.834 "data_offset": 2048, 00:20:56.834 "data_size": 63488 00:20:56.834 }, 00:20:56.834 { 00:20:56.834 "name": null, 00:20:56.834 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:20:56.834 "is_configured": false, 00:20:56.834 "data_offset": 2048, 00:20:56.834 "data_size": 63488 00:20:56.834 } 00:20:56.834 ] 00:20:56.834 }' 00:20:56.834 00:40:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:56.834 00:40:30 -- common/autotest_common.sh@10 -- # set +x 00:20:57.411 00:40:30 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:57.411 00:40:30 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:57.670 [2024-04-27 00:40:31.110160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:57.670 [2024-04-27 00:40:31.110453] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.670 [2024-04-27 00:40:31.110631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:57.670 [2024-04-27 00:40:31.110814] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.670 [2024-04-27 00:40:31.111356] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.670 [2024-04-27 
00:40:31.111533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:57.670 [2024-04-27 00:40:31.111748] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:57.670 [2024-04-27 00:40:31.111871] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:57.670 pt2 00:20:57.670 00:40:31 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:57.929 [2024-04-27 00:40:31.362298] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.929 00:40:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.188 00:40:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.188 "name": "raid_bdev1", 00:20:58.188 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:20:58.188 "strip_size_kb": 0, 00:20:58.188 "state": "configuring", 00:20:58.188 "raid_level": "raid1", 00:20:58.188 "superblock": true, 00:20:58.188 "num_base_bdevs": 4, 00:20:58.188 "num_base_bdevs_discovered": 1, 00:20:58.188 "num_base_bdevs_operational": 4, 00:20:58.188 "base_bdevs_list": [ 00:20:58.188 { 00:20:58.188 "name": "pt1", 00:20:58.188 "uuid": "a0a585d9-70bd-5627-8aa0-49063d5aaba0", 00:20:58.188 "is_configured": true, 00:20:58.188 "data_offset": 2048, 00:20:58.188 "data_size": 63488 00:20:58.188 }, 00:20:58.188 { 00:20:58.188 "name": null, 00:20:58.188 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:20:58.188 "is_configured": false, 00:20:58.188 "data_offset": 2048, 00:20:58.188 "data_size": 63488 00:20:58.188 }, 00:20:58.188 { 00:20:58.188 "name": null, 00:20:58.188 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:20:58.188 "is_configured": false, 00:20:58.188 "data_offset": 2048, 00:20:58.188 "data_size": 63488 00:20:58.188 }, 00:20:58.188 { 00:20:58.188 "name": null, 00:20:58.188 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:20:58.188 "is_configured": false, 00:20:58.188 "data_offset": 2048, 00:20:58.188 "data_size": 63488 00:20:58.188 } 00:20:58.188 ] 00:20:58.188 }' 00:20:58.188 00:40:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.188 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:20:58.755 00:40:32 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:58.755 00:40:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:58.755 00:40:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:59.013 [2024-04-27 
00:40:32.446550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:59.013 [2024-04-27 00:40:32.446817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.013 [2024-04-27 00:40:32.446983] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:59.013 [2024-04-27 00:40:32.447138] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.013 [2024-04-27 00:40:32.447731] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.013 [2024-04-27 00:40:32.447895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:59.013 [2024-04-27 00:40:32.448096] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:59.013 [2024-04-27 00:40:32.448221] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:59.013 pt2 00:20:59.013 00:40:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:59.013 00:40:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:59.013 00:40:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:59.272 [2024-04-27 00:40:32.654603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:59.272 [2024-04-27 00:40:32.654894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.272 [2024-04-27 00:40:32.655047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:59.272 [2024-04-27 00:40:32.655196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.272 [2024-04-27 00:40:32.655716] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.272 [2024-04-27 00:40:32.655884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:59.272 [2024-04-27 00:40:32.656090] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:59.272 [2024-04-27 00:40:32.656206] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:59.272 pt3 00:20:59.272 00:40:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:59.272 00:40:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:59.272 00:40:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:59.541 [2024-04-27 00:40:32.862688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:59.541 [2024-04-27 00:40:32.863000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.541 [2024-04-27 00:40:32.863080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:59.541 [2024-04-27 00:40:32.863338] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.541 [2024-04-27 00:40:32.863794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.541 [2024-04-27 00:40:32.863948] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:59.541 [2024-04-27 00:40:32.864174] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:59.541 [2024-04-27 00:40:32.864280] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:59.541 [2024-04-27 00:40:32.864536] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:20:59.541 [2024-04-27 00:40:32.864629] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:59.541 [2024-04-27 00:40:32.864763] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:59.541 [2024-04-27 00:40:32.865168] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:20:59.541 [2024-04-27 00:40:32.865272] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:20:59.541 [2024-04-27 00:40:32.865483] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.541 pt4 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.541 00:40:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.541 00:40:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.541 "name": "raid_bdev1", 00:20:59.541 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:20:59.541 "strip_size_kb": 0, 00:20:59.541 "state": "online", 00:20:59.541 "raid_level": "raid1", 00:20:59.541 "superblock": true, 00:20:59.541 "num_base_bdevs": 4, 00:20:59.541 "num_base_bdevs_discovered": 4, 00:20:59.541 "num_base_bdevs_operational": 4, 00:20:59.541 "base_bdevs_list": [ 00:20:59.541 { 00:20:59.541 "name": "pt1", 00:20:59.541 "uuid": "a0a585d9-70bd-5627-8aa0-49063d5aaba0", 00:20:59.541 "is_configured": true, 00:20:59.541 "data_offset": 2048, 00:20:59.541 "data_size": 63488 00:20:59.541 }, 00:20:59.541 { 00:20:59.541 "name": "pt2", 00:20:59.541 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:20:59.541 "is_configured": true, 00:20:59.541 "data_offset": 2048, 00:20:59.541 "data_size": 63488 00:20:59.541 }, 00:20:59.541 { 00:20:59.541 "name": "pt3", 00:20:59.541 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:20:59.541 "is_configured": true, 00:20:59.541 "data_offset": 2048, 00:20:59.541 "data_size": 63488 00:20:59.541 }, 00:20:59.541 { 00:20:59.541 "name": "pt4", 00:20:59.541 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:20:59.541 "is_configured": true, 00:20:59.541 "data_offset": 2048, 00:20:59.541 "data_size": 63488 00:20:59.541 } 00:20:59.541 ] 00:20:59.541 }' 00:20:59.541 00:40:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.541 00:40:33 -- common/autotest_common.sh@10 -- # set +x 
00:21:00.474 00:40:33 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:00.474 00:40:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:21:00.474 [2024-04-27 00:40:34.019360] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:00.474 00:40:34 -- bdev/bdev_raid.sh@430 -- # '[' c944ce48-8b56-4465-a350-56830542c12a '!=' c944ce48-8b56-4465-a350-56830542c12a ']' 00:21:00.474 00:40:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:21:00.474 00:40:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:00.474 00:40:34 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:00.474 00:40:34 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:00.732 [2024-04-27 00:40:34.231184] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.732 00:40:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.990 00:40:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.990 "name": "raid_bdev1", 00:21:00.990 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:21:00.990 "strip_size_kb": 0, 00:21:00.990 "state": "online", 00:21:00.990 "raid_level": "raid1", 00:21:00.990 "superblock": true, 00:21:00.990 "num_base_bdevs": 4, 00:21:00.990 "num_base_bdevs_discovered": 3, 00:21:00.990 "num_base_bdevs_operational": 3, 00:21:00.990 "base_bdevs_list": [ 00:21:00.990 { 00:21:00.990 "name": null, 00:21:00.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.991 "is_configured": false, 00:21:00.991 "data_offset": 2048, 00:21:00.991 "data_size": 63488 00:21:00.991 }, 00:21:00.991 { 00:21:00.991 "name": "pt2", 00:21:00.991 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:21:00.991 "is_configured": true, 00:21:00.991 "data_offset": 2048, 00:21:00.991 "data_size": 63488 00:21:00.991 }, 00:21:00.991 { 00:21:00.991 "name": "pt3", 00:21:00.991 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:21:00.991 "is_configured": true, 00:21:00.991 "data_offset": 2048, 00:21:00.991 "data_size": 63488 00:21:00.991 }, 00:21:00.991 { 00:21:00.991 "name": "pt4", 00:21:00.991 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:21:00.991 "is_configured": true, 00:21:00.991 "data_offset": 2048, 00:21:00.991 "data_size": 63488 00:21:00.991 } 00:21:00.991 ] 00:21:00.991 }' 00:21:00.991 00:40:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.991 00:40:34 -- common/autotest_common.sh@10 -- # set +x 00:21:01.558 00:40:35 -- bdev/bdev_raid.sh@442 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:01.816 [2024-04-27 00:40:35.279472] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.816 [2024-04-27 00:40:35.279686] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.816 [2024-04-27 00:40:35.279867] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.816 [2024-04-27 00:40:35.280056] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.816 [2024-04-27 00:40:35.280159] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:21:01.816 00:40:35 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.816 00:40:35 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:21:02.074 00:40:35 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:21:02.074 00:40:35 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:21:02.074 00:40:35 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:21:02.074 00:40:35 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:02.074 00:40:35 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:02.332 00:40:35 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:21:02.332 00:40:35 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:02.332 00:40:35 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:02.591 00:40:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:21:02.591 00:40:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:02.591 00:40:36 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:02.849 00:40:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:21:02.849 00:40:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:21:02.849 00:40:36 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:21:02.849 00:40:36 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:21:02.849 00:40:36 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.849 [2024-04-27 00:40:36.423684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.849 [2024-04-27 00:40:36.423958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.849 [2024-04-27 00:40:36.424105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:02.849 [2024-04-27 00:40:36.424236] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.849 [2024-04-27 00:40:36.426707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.849 [2024-04-27 00:40:36.426931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.849 [2024-04-27 00:40:36.427148] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:02.849 [2024-04-27 00:40:36.427313] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.849 pt2 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 3 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:03.108 "name": "raid_bdev1", 00:21:03.108 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:21:03.108 "strip_size_kb": 0, 00:21:03.108 "state": "configuring", 00:21:03.108 "raid_level": "raid1", 00:21:03.108 "superblock": true, 00:21:03.108 "num_base_bdevs": 4, 00:21:03.108 "num_base_bdevs_discovered": 1, 00:21:03.108 "num_base_bdevs_operational": 3, 00:21:03.108 "base_bdevs_list": [ 00:21:03.108 { 00:21:03.108 "name": null, 00:21:03.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.108 "is_configured": false, 00:21:03.108 "data_offset": 2048, 00:21:03.108 "data_size": 63488 00:21:03.108 }, 00:21:03.108 { 00:21:03.108 "name": "pt2", 00:21:03.108 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:21:03.108 "is_configured": true, 00:21:03.108 "data_offset": 2048, 00:21:03.108 "data_size": 63488 00:21:03.108 }, 00:21:03.108 { 00:21:03.108 "name": null, 00:21:03.108 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:21:03.108 "is_configured": false, 00:21:03.108 "data_offset": 2048, 00:21:03.108 "data_size": 63488 00:21:03.108 }, 00:21:03.108 { 00:21:03.108 "name": null, 00:21:03.108 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:21:03.108 "is_configured": false, 00:21:03.108 "data_offset": 2048, 00:21:03.108 "data_size": 63488 00:21:03.108 } 00:21:03.108 ] 00:21:03.108 }' 00:21:03.108 00:40:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:03.108 00:40:36 -- common/autotest_common.sh@10 -- # set +x 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:04.049 [2024-04-27 00:40:37.512036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:04.049 [2024-04-27 00:40:37.512340] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.049 [2024-04-27 00:40:37.512494] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:04.049 [2024-04-27 00:40:37.512611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.049 [2024-04-27 00:40:37.513228] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.049 [2024-04-27 00:40:37.513408] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:04.049 
[2024-04-27 00:40:37.513616] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:04.049 [2024-04-27 00:40:37.513739] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:04.049 pt3 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.049 00:40:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.317 00:40:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.317 "name": "raid_bdev1", 00:21:04.317 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:21:04.317 "strip_size_kb": 0, 00:21:04.317 "state": "configuring", 00:21:04.317 "raid_level": "raid1", 00:21:04.317 "superblock": true, 00:21:04.317 "num_base_bdevs": 4, 00:21:04.317 "num_base_bdevs_discovered": 2, 00:21:04.317 "num_base_bdevs_operational": 3, 00:21:04.317 "base_bdevs_list": [ 00:21:04.317 { 00:21:04.317 "name": null, 00:21:04.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.317 "is_configured": false, 00:21:04.317 "data_offset": 2048, 00:21:04.317 "data_size": 63488 00:21:04.317 }, 00:21:04.317 { 00:21:04.317 "name": "pt2", 00:21:04.317 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:21:04.317 "is_configured": true, 00:21:04.317 "data_offset": 2048, 00:21:04.317 "data_size": 63488 00:21:04.317 }, 00:21:04.317 { 00:21:04.317 "name": "pt3", 00:21:04.317 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:21:04.317 "is_configured": true, 00:21:04.317 "data_offset": 2048, 00:21:04.317 "data_size": 63488 00:21:04.317 }, 00:21:04.317 { 00:21:04.317 "name": null, 00:21:04.317 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:21:04.317 "is_configured": false, 00:21:04.317 "data_offset": 2048, 00:21:04.317 "data_size": 63488 00:21:04.317 } 00:21:04.317 ] 00:21:04.317 }' 00:21:04.317 00:40:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.317 00:40:37 -- common/autotest_common.sh@10 -- # set +x 00:21:04.884 00:40:38 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:21:04.884 00:40:38 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:21:04.884 00:40:38 -- bdev/bdev_raid.sh@462 -- # i=3 00:21:04.884 00:40:38 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:05.144 [2024-04-27 00:40:38.700322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:05.144 [2024-04-27 00:40:38.700440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.144 [2024-04-27 00:40:38.700481] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000b480 00:21:05.144 [2024-04-27 00:40:38.700502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.144 [2024-04-27 00:40:38.701036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.144 [2024-04-27 00:40:38.701069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:05.144 [2024-04-27 00:40:38.701180] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:05.144 [2024-04-27 00:40:38.701205] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:05.144 [2024-04-27 00:40:38.701343] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:21:05.144 [2024-04-27 00:40:38.701355] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:05.144 [2024-04-27 00:40:38.701484] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:21:05.144 [2024-04-27 00:40:38.701835] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:21:05.144 [2024-04-27 00:40:38.701849] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:21:05.144 [2024-04-27 00:40:38.701996] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.144 pt4 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.144 00:40:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.404 00:40:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.404 "name": "raid_bdev1", 00:21:05.404 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:21:05.404 "strip_size_kb": 0, 00:21:05.404 "state": "online", 00:21:05.404 "raid_level": "raid1", 00:21:05.404 "superblock": true, 00:21:05.404 "num_base_bdevs": 4, 00:21:05.404 "num_base_bdevs_discovered": 3, 00:21:05.404 "num_base_bdevs_operational": 3, 00:21:05.404 "base_bdevs_list": [ 00:21:05.404 { 00:21:05.404 "name": null, 00:21:05.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.404 "is_configured": false, 00:21:05.404 "data_offset": 2048, 00:21:05.404 "data_size": 63488 00:21:05.404 }, 00:21:05.404 { 00:21:05.404 "name": "pt2", 00:21:05.404 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:21:05.404 "is_configured": true, 00:21:05.404 "data_offset": 2048, 00:21:05.404 "data_size": 63488 00:21:05.404 }, 00:21:05.404 { 00:21:05.404 "name": "pt3", 00:21:05.404 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:21:05.404 "is_configured": true, 00:21:05.404 "data_offset": 2048, 
00:21:05.404 "data_size": 63488 00:21:05.404 }, 00:21:05.404 { 00:21:05.404 "name": "pt4", 00:21:05.404 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:21:05.404 "is_configured": true, 00:21:05.404 "data_offset": 2048, 00:21:05.404 "data_size": 63488 00:21:05.404 } 00:21:05.404 ] 00:21:05.404 }' 00:21:05.404 00:40:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.404 00:40:38 -- common/autotest_common.sh@10 -- # set +x 00:21:05.970 00:40:39 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:21:05.970 00:40:39 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:06.228 [2024-04-27 00:40:39.768519] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.228 [2024-04-27 00:40:39.768570] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.228 [2024-04-27 00:40:39.768644] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.228 [2024-04-27 00:40:39.768715] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.228 [2024-04-27 00:40:39.768725] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:21:06.228 00:40:39 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.228 00:40:39 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:21:06.486 00:40:39 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:21:06.486 00:40:39 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:21:06.486 00:40:40 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:06.744 [2024-04-27 00:40:40.188615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:06.744 [2024-04-27 00:40:40.188739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.744 [2024-04-27 00:40:40.188779] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:06.744 [2024-04-27 00:40:40.188802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.744 [2024-04-27 00:40:40.191541] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.744 [2024-04-27 00:40:40.191624] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:06.744 [2024-04-27 00:40:40.191747] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:21:06.744 [2024-04-27 00:40:40.191791] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:06.744 pt1 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:06.744 00:40:40 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.744 00:40:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.003 00:40:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.003 "name": "raid_bdev1", 00:21:07.003 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:21:07.003 "strip_size_kb": 0, 00:21:07.003 "state": "configuring", 00:21:07.003 "raid_level": "raid1", 00:21:07.003 "superblock": true, 00:21:07.003 "num_base_bdevs": 4, 00:21:07.003 "num_base_bdevs_discovered": 1, 00:21:07.003 "num_base_bdevs_operational": 4, 00:21:07.003 "base_bdevs_list": [ 00:21:07.003 { 00:21:07.003 "name": "pt1", 00:21:07.003 "uuid": "a0a585d9-70bd-5627-8aa0-49063d5aaba0", 00:21:07.003 "is_configured": true, 00:21:07.003 "data_offset": 2048, 00:21:07.003 "data_size": 63488 00:21:07.003 }, 00:21:07.003 { 00:21:07.003 "name": null, 00:21:07.003 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:21:07.003 "is_configured": false, 00:21:07.003 "data_offset": 2048, 00:21:07.003 "data_size": 63488 00:21:07.003 }, 00:21:07.003 { 00:21:07.003 "name": null, 00:21:07.003 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:21:07.003 "is_configured": false, 00:21:07.003 "data_offset": 2048, 00:21:07.003 "data_size": 63488 00:21:07.003 }, 00:21:07.003 { 00:21:07.003 "name": null, 00:21:07.003 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:21:07.003 "is_configured": false, 00:21:07.003 "data_offset": 2048, 00:21:07.003 "data_size": 63488 00:21:07.003 } 00:21:07.003 ] 00:21:07.003 }' 00:21:07.003 00:40:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.003 00:40:40 -- common/autotest_common.sh@10 -- # set +x 00:21:07.569 00:40:41 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:21:07.569 00:40:41 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:07.569 00:40:41 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:07.827 00:40:41 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:21:07.827 00:40:41 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:07.827 00:40:41 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:08.086 00:40:41 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:21:08.086 00:40:41 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:08.086 00:40:41 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:21:08.344 00:40:41 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:21:08.344 00:40:41 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:21:08.344 00:40:41 -- bdev/bdev_raid.sh@489 -- # i=3 00:21:08.344 00:40:41 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:08.603 [2024-04-27 00:40:42.013008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:08.603 [2024-04-27 00:40:42.013111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.603 [2024-04-27 00:40:42.013145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:08.603 [2024-04-27 00:40:42.013172] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.603 [2024-04-27 00:40:42.013706] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.603 [2024-04-27 00:40:42.013755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:08.603 [2024-04-27 00:40:42.013860] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:21:08.603 [2024-04-27 00:40:42.013874] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:08.603 [2024-04-27 00:40:42.013881] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.603 [2024-04-27 00:40:42.013902] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:21:08.603 [2024-04-27 00:40:42.013968] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:08.603 pt4 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.603 00:40:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.861 00:40:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.861 "name": "raid_bdev1", 00:21:08.861 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:21:08.861 "strip_size_kb": 0, 00:21:08.861 "state": "configuring", 00:21:08.861 "raid_level": "raid1", 00:21:08.861 "superblock": true, 00:21:08.861 "num_base_bdevs": 4, 00:21:08.861 "num_base_bdevs_discovered": 1, 00:21:08.861 "num_base_bdevs_operational": 3, 00:21:08.861 "base_bdevs_list": [ 00:21:08.861 { 00:21:08.861 "name": null, 00:21:08.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.861 "is_configured": false, 00:21:08.861 "data_offset": 2048, 00:21:08.861 "data_size": 63488 00:21:08.861 }, 00:21:08.861 { 00:21:08.861 "name": null, 00:21:08.861 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:21:08.861 "is_configured": false, 00:21:08.861 "data_offset": 2048, 00:21:08.861 "data_size": 63488 00:21:08.861 }, 00:21:08.861 { 00:21:08.861 "name": null, 00:21:08.861 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:21:08.861 "is_configured": false, 00:21:08.861 "data_offset": 2048, 00:21:08.861 "data_size": 63488 00:21:08.861 }, 00:21:08.861 { 00:21:08.861 "name": "pt4", 00:21:08.861 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:21:08.861 "is_configured": true, 00:21:08.861 "data_offset": 2048, 00:21:08.861 "data_size": 63488 00:21:08.861 } 00:21:08.861 ] 00:21:08.861 }' 00:21:08.861 00:40:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.861 00:40:42 -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.427 00:40:42 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:21:09.427 00:40:42 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:21:09.427 00:40:42 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.684 [2024-04-27 00:40:43.054995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.684 [2024-04-27 00:40:43.055143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.684 [2024-04-27 00:40:43.055184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:09.684 [2024-04-27 00:40:43.055227] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.684 [2024-04-27 00:40:43.055856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.684 [2024-04-27 00:40:43.055960] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.684 [2024-04-27 00:40:43.056071] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:21:09.684 [2024-04-27 00:40:43.056111] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.684 pt2 00:21:09.684 00:40:43 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:21:09.684 00:40:43 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:21:09.685 00:40:43 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:09.942 [2024-04-27 00:40:43.323117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:09.942 [2024-04-27 00:40:43.323267] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.942 [2024-04-27 00:40:43.323304] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:09.942 [2024-04-27 00:40:43.323348] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.942 [2024-04-27 00:40:43.323973] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.942 [2024-04-27 00:40:43.324080] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:09.942 [2024-04-27 00:40:43.324194] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:21:09.942 [2024-04-27 00:40:43.324246] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:09.942 [2024-04-27 00:40:43.324443] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:21:09.942 [2024-04-27 00:40:43.324476] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:09.942 [2024-04-27 00:40:43.324621] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:21:09.942 [2024-04-27 00:40:43.325040] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:21:09.942 [2024-04-27 00:40:43.325070] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:21:09.942 [2024-04-27 00:40:43.325293] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.942 pt3 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 
00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.942 00:40:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.201 00:40:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.201 "name": "raid_bdev1", 00:21:10.201 "uuid": "c944ce48-8b56-4465-a350-56830542c12a", 00:21:10.201 "strip_size_kb": 0, 00:21:10.201 "state": "online", 00:21:10.201 "raid_level": "raid1", 00:21:10.201 "superblock": true, 00:21:10.201 "num_base_bdevs": 4, 00:21:10.201 "num_base_bdevs_discovered": 3, 00:21:10.201 "num_base_bdevs_operational": 3, 00:21:10.201 "base_bdevs_list": [ 00:21:10.201 { 00:21:10.201 "name": null, 00:21:10.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.201 "is_configured": false, 00:21:10.201 "data_offset": 2048, 00:21:10.201 "data_size": 63488 00:21:10.201 }, 00:21:10.201 { 00:21:10.201 "name": "pt2", 00:21:10.201 "uuid": "0cd7dd4c-ddd9-56dd-a62a-90885c0e0cdf", 00:21:10.201 "is_configured": true, 00:21:10.201 "data_offset": 2048, 00:21:10.201 "data_size": 63488 00:21:10.201 }, 00:21:10.201 { 00:21:10.201 "name": "pt3", 00:21:10.201 "uuid": "df5c14b3-eb24-5e02-a2b6-82ef4de16190", 00:21:10.201 "is_configured": true, 00:21:10.201 "data_offset": 2048, 00:21:10.201 "data_size": 63488 00:21:10.201 }, 00:21:10.201 { 00:21:10.201 "name": "pt4", 00:21:10.201 "uuid": "09cfea2c-a1ab-5fe1-8ce5-07e8ee95654d", 00:21:10.201 "is_configured": true, 00:21:10.201 "data_offset": 2048, 00:21:10.201 "data_size": 63488 00:21:10.201 } 00:21:10.201 ] 00:21:10.201 }' 00:21:10.201 00:40:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.201 00:40:43 -- common/autotest_common.sh@10 -- # set +x 00:21:10.767 00:40:44 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:10.767 00:40:44 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:21:11.026 [2024-04-27 00:40:44.459637] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.026 00:40:44 -- bdev/bdev_raid.sh@506 -- # '[' c944ce48-8b56-4465-a350-56830542c12a '!=' c944ce48-8b56-4465-a350-56830542c12a ']' 00:21:11.026 00:40:44 -- bdev/bdev_raid.sh@511 -- # killprocess 129472 00:21:11.026 00:40:44 -- common/autotest_common.sh@936 -- # '[' -z 129472 ']' 00:21:11.026 00:40:44 -- common/autotest_common.sh@940 -- # kill -0 129472 00:21:11.026 00:40:44 -- common/autotest_common.sh@941 -- # uname 00:21:11.026 00:40:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:11.026 00:40:44 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 129472 00:21:11.026 00:40:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:11.026 00:40:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:11.026 killing process with pid 129472 00:21:11.026 00:40:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 129472' 00:21:11.026 00:40:44 -- common/autotest_common.sh@955 -- # kill 129472 00:21:11.026 [2024-04-27 00:40:44.494809] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.026 [2024-04-27 00:40:44.494888] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.026 00:40:44 -- common/autotest_common.sh@960 -- # wait 129472 00:21:11.026 [2024-04-27 00:40:44.494993] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.026 [2024-04-27 00:40:44.495007] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:21:11.295 [2024-04-27 00:40:44.776567] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:12.245 00:40:45 -- bdev/bdev_raid.sh@513 -- # return 0 00:21:12.245 00:21:12.245 real 0m22.334s 00:21:12.245 user 0m41.224s 00:21:12.245 sys 0m2.375s 00:21:12.245 00:40:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:12.245 00:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.245 ************************************ 00:21:12.245 END TEST raid_superblock_test 00:21:12.245 ************************************ 00:21:12.245 00:40:45 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:21:12.245 00:40:45 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:12.245 00:40:45 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:21:12.246 00:40:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:12.246 00:40:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:12.246 00:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.504 ************************************ 00:21:12.504 START TEST raid_rebuild_test 00:21:12.504 ************************************ 00:21:12.504 00:40:45 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 false false 00:21:12.504 00:40:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:12.504 00:40:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:12.504 00:40:45 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:12.504 00:40:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:12.504 00:40:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:12.504 00:40:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:12.505 00:40:45 -- 
bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=130156 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 130156 /var/tmp/spdk-raid.sock 00:21:12.505 00:40:45 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:12.505 00:40:45 -- common/autotest_common.sh@817 -- # '[' -z 130156 ']' 00:21:12.505 00:40:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:12.505 00:40:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:12.505 00:40:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:12.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:12.505 00:40:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:12.505 00:40:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.505 [2024-04-27 00:40:45.911486] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:21:12.505 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:12.505 Zero copy mechanism will not be used. 00:21:12.505 [2024-04-27 00:40:45.911678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130156 ] 00:21:12.505 [2024-04-27 00:40:46.081148] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.763 [2024-04-27 00:40:46.313578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.022 [2024-04-27 00:40:46.494848] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:13.281 00:40:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:13.281 00:40:46 -- common/autotest_common.sh@850 -- # return 0 00:21:13.281 00:40:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:13.281 00:40:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:13.281 00:40:46 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:13.539 BaseBdev1 00:21:13.539 00:40:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:13.539 00:40:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:13.539 00:40:47 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:13.798 BaseBdev2 00:21:13.798 00:40:47 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:14.057 spare_malloc 00:21:14.057 00:40:47 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:14.316 spare_delay 00:21:14.316 00:40:47 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:14.575 [2024-04-27 00:40:48.020743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:14.575 [2024-04-27 00:40:48.020860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:14.575 [2024-04-27 00:40:48.020902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:14.575 [2024-04-27 00:40:48.020951] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:14.575 [2024-04-27 00:40:48.023576] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:14.575 [2024-04-27 00:40:48.023650] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:14.575 spare 00:21:14.575 00:40:48 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:14.833 [2024-04-27 00:40:48.228839] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:14.833 [2024-04-27 00:40:48.230898] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:14.833 [2024-04-27 00:40:48.231001] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:21:14.833 [2024-04-27 00:40:48.231014] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:14.833 [2024-04-27 00:40:48.231146] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:14.833 [2024-04-27 00:40:48.231510] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:21:14.834 [2024-04-27 00:40:48.231535] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:21:14.834 [2024-04-27 00:40:48.231743] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.834 00:40:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.092 00:40:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:15.092 "name": "raid_bdev1", 00:21:15.092 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:15.092 "strip_size_kb": 0, 00:21:15.092 "state": "online", 00:21:15.092 "raid_level": "raid1", 00:21:15.092 "superblock": false, 00:21:15.092 "num_base_bdevs": 2, 00:21:15.092 "num_base_bdevs_discovered": 2, 00:21:15.092 "num_base_bdevs_operational": 2, 00:21:15.092 
"base_bdevs_list": [ 00:21:15.092 { 00:21:15.092 "name": "BaseBdev1", 00:21:15.092 "uuid": "768455c8-e675-4d59-9fb6-58c5499261bf", 00:21:15.092 "is_configured": true, 00:21:15.092 "data_offset": 0, 00:21:15.092 "data_size": 65536 00:21:15.092 }, 00:21:15.092 { 00:21:15.092 "name": "BaseBdev2", 00:21:15.092 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:15.092 "is_configured": true, 00:21:15.092 "data_offset": 0, 00:21:15.092 "data_size": 65536 00:21:15.092 } 00:21:15.092 ] 00:21:15.092 }' 00:21:15.092 00:40:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:15.092 00:40:48 -- common/autotest_common.sh@10 -- # set +x 00:21:15.659 00:40:49 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:15.659 00:40:49 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:15.917 [2024-04-27 00:40:49.365263] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:15.917 00:40:49 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:15.917 00:40:49 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.917 00:40:49 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:16.176 00:40:49 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:16.176 00:40:49 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:16.176 00:40:49 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:16.176 00:40:49 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@12 -- # local i 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:16.176 00:40:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:16.434 [2024-04-27 00:40:49.849210] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:16.434 /dev/nbd0 00:21:16.434 00:40:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:16.434 00:40:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:16.434 00:40:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:16.434 00:40:49 -- common/autotest_common.sh@855 -- # local i 00:21:16.434 00:40:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:16.434 00:40:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:16.434 00:40:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:16.434 00:40:49 -- common/autotest_common.sh@859 -- # break 00:21:16.434 00:40:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:16.434 00:40:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:16.434 00:40:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:16.434 1+0 records in 00:21:16.434 1+0 records out 00:21:16.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078956 s, 5.2 MB/s 00:21:16.434 00:40:49 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.434 00:40:49 -- common/autotest_common.sh@872 -- # size=4096 00:21:16.434 00:40:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:16.434 00:40:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:16.434 00:40:49 -- common/autotest_common.sh@875 -- # return 0 00:21:16.434 00:40:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:16.434 00:40:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:16.434 00:40:49 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:16.434 00:40:49 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:16.434 00:40:49 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:21.698 65536+0 records in 00:21:21.698 65536+0 records out 00:21:21.698 33554432 bytes (34 MB, 32 MiB) copied, 5.0262 s, 6.7 MB/s 00:21:21.698 00:40:54 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:21.698 00:40:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:21.698 00:40:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:21.698 00:40:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:21.698 00:40:54 -- bdev/nbd_common.sh@51 -- # local i 00:21:21.698 00:40:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:21.698 00:40:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:21.698 00:40:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:21.698 00:40:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:21.698 00:40:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:21.698 00:40:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:21.698 00:40:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:21.698 00:40:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:21.698 00:40:55 -- bdev/nbd_common.sh@41 -- # break 00:21:21.698 [2024-04-27 00:40:55.215043] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.698 00:40:55 -- bdev/nbd_common.sh@45 -- # return 0 00:21:21.698 00:40:55 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:21.956 [2024-04-27 00:40:55.466960] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.956 00:40:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.283 00:40:55 
-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:22.283 "name": "raid_bdev1", 00:21:22.283 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:22.283 "strip_size_kb": 0, 00:21:22.283 "state": "online", 00:21:22.283 "raid_level": "raid1", 00:21:22.283 "superblock": false, 00:21:22.283 "num_base_bdevs": 2, 00:21:22.283 "num_base_bdevs_discovered": 1, 00:21:22.283 "num_base_bdevs_operational": 1, 00:21:22.283 "base_bdevs_list": [ 00:21:22.283 { 00:21:22.283 "name": null, 00:21:22.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.283 "is_configured": false, 00:21:22.283 "data_offset": 0, 00:21:22.283 "data_size": 65536 00:21:22.283 }, 00:21:22.283 { 00:21:22.283 "name": "BaseBdev2", 00:21:22.283 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:22.283 "is_configured": true, 00:21:22.283 "data_offset": 0, 00:21:22.283 "data_size": 65536 00:21:22.283 } 00:21:22.283 ] 00:21:22.283 }' 00:21:22.283 00:40:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:22.283 00:40:55 -- common/autotest_common.sh@10 -- # set +x 00:21:22.849 00:40:56 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:23.107 [2024-04-27 00:40:56.579250] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:23.107 [2024-04-27 00:40:56.579311] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.107 [2024-04-27 00:40:56.591873] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:21:23.107 [2024-04-27 00:40:56.593807] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.107 00:40:56 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:24.040 00:40:57 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.040 00:40:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.040 00:40:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.040 00:40:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.040 00:40:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.040 00:40:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.040 00:40:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.298 00:40:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.298 "name": "raid_bdev1", 00:21:24.298 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:24.298 "strip_size_kb": 0, 00:21:24.298 "state": "online", 00:21:24.298 "raid_level": "raid1", 00:21:24.298 "superblock": false, 00:21:24.298 "num_base_bdevs": 2, 00:21:24.298 "num_base_bdevs_discovered": 2, 00:21:24.298 "num_base_bdevs_operational": 2, 00:21:24.298 "process": { 00:21:24.298 "type": "rebuild", 00:21:24.298 "target": "spare", 00:21:24.298 "progress": { 00:21:24.298 "blocks": 24576, 00:21:24.298 "percent": 37 00:21:24.298 } 00:21:24.298 }, 00:21:24.298 "base_bdevs_list": [ 00:21:24.298 { 00:21:24.298 "name": "spare", 00:21:24.298 "uuid": "5e45d14f-242f-569a-a26a-0def3c713cdb", 00:21:24.298 "is_configured": true, 00:21:24.298 "data_offset": 0, 00:21:24.298 "data_size": 65536 00:21:24.298 }, 00:21:24.298 { 00:21:24.298 "name": "BaseBdev2", 00:21:24.298 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:24.298 "is_configured": true, 00:21:24.298 "data_offset": 0, 00:21:24.298 "data_size": 65536 00:21:24.298 } 00:21:24.298 ] 
00:21:24.298 }' 00:21:24.298 00:40:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.556 00:40:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.556 00:40:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.556 00:40:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.556 00:40:57 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:24.815 [2024-04-27 00:40:58.212078] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.815 [2024-04-27 00:40:58.303084] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:24.815 [2024-04-27 00:40:58.303175] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.815 00:40:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.077 00:40:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.077 "name": "raid_bdev1", 00:21:25.077 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:25.077 "strip_size_kb": 0, 00:21:25.077 "state": "online", 00:21:25.077 "raid_level": "raid1", 00:21:25.077 "superblock": false, 00:21:25.077 "num_base_bdevs": 2, 00:21:25.077 "num_base_bdevs_discovered": 1, 00:21:25.077 "num_base_bdevs_operational": 1, 00:21:25.077 "base_bdevs_list": [ 00:21:25.077 { 00:21:25.077 "name": null, 00:21:25.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.077 "is_configured": false, 00:21:25.077 "data_offset": 0, 00:21:25.077 "data_size": 65536 00:21:25.077 }, 00:21:25.077 { 00:21:25.077 "name": "BaseBdev2", 00:21:25.077 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:25.077 "is_configured": true, 00:21:25.077 "data_offset": 0, 00:21:25.077 "data_size": 65536 00:21:25.077 } 00:21:25.077 ] 00:21:25.077 }' 00:21:25.077 00:40:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.077 00:40:58 -- common/autotest_common.sh@10 -- # set +x 00:21:25.641 00:40:59 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.641 00:40:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.641 00:40:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:25.641 00:40:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:25.641 00:40:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.641 00:40:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.641 00:40:59 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.898 00:40:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.898 "name": "raid_bdev1", 00:21:25.898 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:25.898 "strip_size_kb": 0, 00:21:25.898 "state": "online", 00:21:25.898 "raid_level": "raid1", 00:21:25.898 "superblock": false, 00:21:25.898 "num_base_bdevs": 2, 00:21:25.898 "num_base_bdevs_discovered": 1, 00:21:25.898 "num_base_bdevs_operational": 1, 00:21:25.898 "base_bdevs_list": [ 00:21:25.898 { 00:21:25.898 "name": null, 00:21:25.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.898 "is_configured": false, 00:21:25.898 "data_offset": 0, 00:21:25.898 "data_size": 65536 00:21:25.898 }, 00:21:25.898 { 00:21:25.898 "name": "BaseBdev2", 00:21:25.898 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:25.898 "is_configured": true, 00:21:25.898 "data_offset": 0, 00:21:25.898 "data_size": 65536 00:21:25.898 } 00:21:25.898 ] 00:21:25.898 }' 00:21:25.898 00:40:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.898 00:40:59 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:25.898 00:40:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.155 00:40:59 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:26.155 00:40:59 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:26.156 [2024-04-27 00:40:59.689983] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:26.156 [2024-04-27 00:40:59.690030] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.156 [2024-04-27 00:40:59.702661] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:21:26.156 [2024-04-27 00:40:59.704673] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:26.156 00:40:59 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.530 "name": "raid_bdev1", 00:21:27.530 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:27.530 "strip_size_kb": 0, 00:21:27.530 "state": "online", 00:21:27.530 "raid_level": "raid1", 00:21:27.530 "superblock": false, 00:21:27.530 "num_base_bdevs": 2, 00:21:27.530 "num_base_bdevs_discovered": 2, 00:21:27.530 "num_base_bdevs_operational": 2, 00:21:27.530 "process": { 00:21:27.530 "type": "rebuild", 00:21:27.530 "target": "spare", 00:21:27.530 "progress": { 00:21:27.530 "blocks": 24576, 00:21:27.530 "percent": 37 00:21:27.530 } 00:21:27.530 }, 00:21:27.530 "base_bdevs_list": [ 00:21:27.530 { 00:21:27.530 "name": "spare", 00:21:27.530 "uuid": "5e45d14f-242f-569a-a26a-0def3c713cdb", 00:21:27.530 "is_configured": true, 00:21:27.530 "data_offset": 0, 
00:21:27.530 "data_size": 65536 00:21:27.530 }, 00:21:27.530 { 00:21:27.530 "name": "BaseBdev2", 00:21:27.530 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:27.530 "is_configured": true, 00:21:27.530 "data_offset": 0, 00:21:27.530 "data_size": 65536 00:21:27.530 } 00:21:27.530 ] 00:21:27.530 }' 00:21:27.530 00:41:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@657 -- # local timeout=411 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.530 00:41:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.789 00:41:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.789 "name": "raid_bdev1", 00:21:27.789 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:27.789 "strip_size_kb": 0, 00:21:27.789 "state": "online", 00:21:27.789 "raid_level": "raid1", 00:21:27.789 "superblock": false, 00:21:27.789 "num_base_bdevs": 2, 00:21:27.789 "num_base_bdevs_discovered": 2, 00:21:27.789 "num_base_bdevs_operational": 2, 00:21:27.789 "process": { 00:21:27.789 "type": "rebuild", 00:21:27.789 "target": "spare", 00:21:27.789 "progress": { 00:21:27.789 "blocks": 30720, 00:21:27.789 "percent": 46 00:21:27.789 } 00:21:27.789 }, 00:21:27.789 "base_bdevs_list": [ 00:21:27.789 { 00:21:27.789 "name": "spare", 00:21:27.789 "uuid": "5e45d14f-242f-569a-a26a-0def3c713cdb", 00:21:27.789 "is_configured": true, 00:21:27.789 "data_offset": 0, 00:21:27.789 "data_size": 65536 00:21:27.789 }, 00:21:27.789 { 00:21:27.789 "name": "BaseBdev2", 00:21:27.789 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:27.789 "is_configured": true, 00:21:27.789 "data_offset": 0, 00:21:27.789 "data_size": 65536 00:21:27.789 } 00:21:27.789 ] 00:21:27.789 }' 00:21:27.789 00:41:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:28.048 00:41:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.048 00:41:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:28.048 00:41:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.048 00:41:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:28.983 00:41:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:28.983 00:41:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.983 00:41:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.983 00:41:02 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:28.983 00:41:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:28.983 00:41:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.983 00:41:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.983 00:41:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.240 00:41:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.240 "name": "raid_bdev1", 00:21:29.240 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:29.240 "strip_size_kb": 0, 00:21:29.240 "state": "online", 00:21:29.240 "raid_level": "raid1", 00:21:29.240 "superblock": false, 00:21:29.240 "num_base_bdevs": 2, 00:21:29.240 "num_base_bdevs_discovered": 2, 00:21:29.240 "num_base_bdevs_operational": 2, 00:21:29.240 "process": { 00:21:29.240 "type": "rebuild", 00:21:29.240 "target": "spare", 00:21:29.240 "progress": { 00:21:29.240 "blocks": 59392, 00:21:29.240 "percent": 90 00:21:29.240 } 00:21:29.240 }, 00:21:29.240 "base_bdevs_list": [ 00:21:29.240 { 00:21:29.240 "name": "spare", 00:21:29.240 "uuid": "5e45d14f-242f-569a-a26a-0def3c713cdb", 00:21:29.240 "is_configured": true, 00:21:29.240 "data_offset": 0, 00:21:29.240 "data_size": 65536 00:21:29.240 }, 00:21:29.240 { 00:21:29.240 "name": "BaseBdev2", 00:21:29.240 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:29.240 "is_configured": true, 00:21:29.240 "data_offset": 0, 00:21:29.240 "data_size": 65536 00:21:29.240 } 00:21:29.240 ] 00:21:29.240 }' 00:21:29.240 00:41:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.240 00:41:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.240 00:41:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.240 00:41:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.240 00:41:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:29.498 [2024-04-27 00:41:02.921812] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:29.498 [2024-04-27 00:41:02.921897] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:29.498 [2024-04-27 00:41:02.921982] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.433 00:41:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:30.433 00:41:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.433 00:41:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.433 00:41:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:30.433 00:41:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:30.433 00:41:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.433 00:41:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.433 00:41:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.692 "name": "raid_bdev1", 00:21:30.692 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:30.692 "strip_size_kb": 0, 00:21:30.692 "state": "online", 00:21:30.692 "raid_level": "raid1", 00:21:30.692 "superblock": false, 00:21:30.692 "num_base_bdevs": 2, 00:21:30.692 "num_base_bdevs_discovered": 2, 00:21:30.692 "num_base_bdevs_operational": 2, 00:21:30.692 "base_bdevs_list": [ 
00:21:30.692 { 00:21:30.692 "name": "spare", 00:21:30.692 "uuid": "5e45d14f-242f-569a-a26a-0def3c713cdb", 00:21:30.692 "is_configured": true, 00:21:30.692 "data_offset": 0, 00:21:30.692 "data_size": 65536 00:21:30.692 }, 00:21:30.692 { 00:21:30.692 "name": "BaseBdev2", 00:21:30.692 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:30.692 "is_configured": true, 00:21:30.692 "data_offset": 0, 00:21:30.692 "data_size": 65536 00:21:30.692 } 00:21:30.692 ] 00:21:30.692 }' 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@660 -- # break 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.692 00:41:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.950 00:41:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.951 "name": "raid_bdev1", 00:21:30.951 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:30.951 "strip_size_kb": 0, 00:21:30.951 "state": "online", 00:21:30.951 "raid_level": "raid1", 00:21:30.951 "superblock": false, 00:21:30.951 "num_base_bdevs": 2, 00:21:30.951 "num_base_bdevs_discovered": 2, 00:21:30.951 "num_base_bdevs_operational": 2, 00:21:30.951 "base_bdevs_list": [ 00:21:30.951 { 00:21:30.951 "name": "spare", 00:21:30.951 "uuid": "5e45d14f-242f-569a-a26a-0def3c713cdb", 00:21:30.951 "is_configured": true, 00:21:30.951 "data_offset": 0, 00:21:30.951 "data_size": 65536 00:21:30.951 }, 00:21:30.951 { 00:21:30.951 "name": "BaseBdev2", 00:21:30.951 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:30.951 "is_configured": true, 00:21:30.951 "data_offset": 0, 00:21:30.951 "data_size": 65536 00:21:30.951 } 00:21:30.951 ] 00:21:30.951 }' 00:21:30.951 00:41:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.951 00:41:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:30.951 00:41:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.212 00:41:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.471 00:41:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.471 "name": "raid_bdev1", 00:21:31.471 "uuid": "b673dd16-452e-40de-9524-f200bd3f52c9", 00:21:31.471 "strip_size_kb": 0, 00:21:31.471 "state": "online", 00:21:31.471 "raid_level": "raid1", 00:21:31.471 "superblock": false, 00:21:31.471 "num_base_bdevs": 2, 00:21:31.471 "num_base_bdevs_discovered": 2, 00:21:31.471 "num_base_bdevs_operational": 2, 00:21:31.471 "base_bdevs_list": [ 00:21:31.471 { 00:21:31.471 "name": "spare", 00:21:31.471 "uuid": "5e45d14f-242f-569a-a26a-0def3c713cdb", 00:21:31.471 "is_configured": true, 00:21:31.471 "data_offset": 0, 00:21:31.471 "data_size": 65536 00:21:31.471 }, 00:21:31.471 { 00:21:31.471 "name": "BaseBdev2", 00:21:31.471 "uuid": "5c951bfd-13c0-4114-acf6-1c2428d677a5", 00:21:31.471 "is_configured": true, 00:21:31.471 "data_offset": 0, 00:21:31.471 "data_size": 65536 00:21:31.471 } 00:21:31.471 ] 00:21:31.471 }' 00:21:31.471 00:41:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.471 00:41:04 -- common/autotest_common.sh@10 -- # set +x 00:21:32.038 00:41:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:32.038 [2024-04-27 00:41:05.563475] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:32.038 [2024-04-27 00:41:05.563506] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:32.038 [2024-04-27 00:41:05.563639] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.038 [2024-04-27 00:41:05.563708] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.038 [2024-04-27 00:41:05.563720] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:21:32.038 00:41:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.038 00:41:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:32.297 00:41:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:32.297 00:41:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:32.297 00:41:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@12 -- # local i 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:32.297 00:41:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:32.555 /dev/nbd0 00:21:32.814 00:41:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:32.814 00:41:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:32.814 00:41:06 -- 
common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:32.814 00:41:06 -- common/autotest_common.sh@855 -- # local i 00:21:32.814 00:41:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:32.814 00:41:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:32.814 00:41:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:32.814 00:41:06 -- common/autotest_common.sh@859 -- # break 00:21:32.814 00:41:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:32.814 00:41:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:32.814 00:41:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:32.814 1+0 records in 00:21:32.814 1+0 records out 00:21:32.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307847 s, 13.3 MB/s 00:21:32.814 00:41:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.814 00:41:06 -- common/autotest_common.sh@872 -- # size=4096 00:21:32.814 00:41:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.814 00:41:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:32.814 00:41:06 -- common/autotest_common.sh@875 -- # return 0 00:21:32.814 00:41:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:32.814 00:41:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:32.814 00:41:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:33.072 /dev/nbd1 00:21:33.072 00:41:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:33.072 00:41:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:33.073 00:41:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:21:33.073 00:41:06 -- common/autotest_common.sh@855 -- # local i 00:21:33.073 00:41:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:33.073 00:41:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:33.073 00:41:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:21:33.073 00:41:06 -- common/autotest_common.sh@859 -- # break 00:21:33.073 00:41:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:33.073 00:41:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:33.073 00:41:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:33.073 1+0 records in 00:21:33.073 1+0 records out 00:21:33.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344173 s, 11.9 MB/s 00:21:33.073 00:41:06 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:33.073 00:41:06 -- common/autotest_common.sh@872 -- # size=4096 00:21:33.073 00:41:06 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:33.073 00:41:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:33.073 00:41:06 -- common/autotest_common.sh@875 -- # return 0 00:21:33.073 00:41:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:33.073 00:41:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:33.073 00:41:06 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:33.073 00:41:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:33.073 00:41:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:33.073 00:41:06 -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:33.073 00:41:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:33.073 00:41:06 -- bdev/nbd_common.sh@51 -- # local i 00:21:33.073 00:41:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:33.073 00:41:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@41 -- # break 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@45 -- # return 0 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:33.331 00:41:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:33.589 00:41:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:33.589 00:41:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:33.589 00:41:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:33.589 00:41:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:33.589 00:41:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.589 00:41:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:33.589 00:41:07 -- bdev/nbd_common.sh@41 -- # break 00:21:33.589 00:41:07 -- bdev/nbd_common.sh@45 -- # return 0 00:21:33.589 00:41:07 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:33.589 00:41:07 -- bdev/bdev_raid.sh@709 -- # killprocess 130156 00:21:33.589 00:41:07 -- common/autotest_common.sh@936 -- # '[' -z 130156 ']' 00:21:33.589 00:41:07 -- common/autotest_common.sh@940 -- # kill -0 130156 00:21:33.589 00:41:07 -- common/autotest_common.sh@941 -- # uname 00:21:33.589 00:41:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.589 00:41:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130156 00:21:33.847 killing process with pid 130156 00:21:33.847 Received shutdown signal, test time was about 60.000000 seconds 00:21:33.847 00:21:33.847 Latency(us) 00:21:33.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.847 =================================================================================================================== 00:21:33.847 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:33.847 00:41:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:33.847 00:41:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:33.847 00:41:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130156' 00:21:33.847 00:41:07 -- common/autotest_common.sh@955 -- # kill 130156 00:21:33.848 [2024-04-27 00:41:07.186858] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:33.848 00:41:07 -- common/autotest_common.sh@960 -- # wait 130156 00:21:33.848 [2024-04-27 00:41:07.394485] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:35.235 ************************************ 00:21:35.235 END TEST raid_rebuild_test 00:21:35.235 ************************************ 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:35.235 00:21:35.235 real 
0m22.533s 00:21:35.235 user 0m31.283s 00:21:35.235 sys 0m3.743s 00:21:35.235 00:41:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:35.235 00:41:08 -- common/autotest_common.sh@10 -- # set +x 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:21:35.235 00:41:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:35.235 00:41:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:35.235 00:41:08 -- common/autotest_common.sh@10 -- # set +x 00:21:35.235 ************************************ 00:21:35.235 START TEST raid_rebuild_test_sb 00:21:35.235 ************************************ 00:21:35.235 00:41:08 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 true false 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@544 -- # raid_pid=130709 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@545 -- # waitforlisten 130709 /var/tmp/spdk-raid.sock 00:21:35.235 00:41:08 -- common/autotest_common.sh@817 -- # '[' -z 130709 ']' 00:21:35.235 00:41:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:35.235 00:41:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:35.235 00:41:08 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:35.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:35.235 00:41:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
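The waitfornbd helper traced repeatedly above (autotest_common.sh@854-875 in the xtrace) is what gates every NBD attach in these tests: it polls /proc/partitions until the device name appears, then issues a single 4 KiB O_DIRECT read to prove the node actually serves I/O rather than merely existing. A condensed sketch reconstructed from the xtrace; the 0.1 s sleep is an assumption (only the loop bounds are visible in the trace), and the second 20-iteration retry loop around the dd is collapsed into one attempt:

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # wait until the kernel registers the partition
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # a 4 KiB direct read proves the device serves I/O
    dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    # succeed only if the read actually returned data
    [ "$size" != 0 ]
}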
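The raid_rebuild_test_sb run that starts here uses the same harness pattern as its predecessor: bdevperf is launched with -z so it idles until told to start, bound to a private RPC socket (/var/tmp/spdk-raid.sock) that keeps the rpc.py traffic isolated from any other SPDK instance, and waitforlisten blocks until that socket answers. A minimal sketch of the launch sequence, reusing the exact bdevperf command line recorded above; the socket poll below merely stands in for waitforlisten, whose real implementation (max_retries=100 in the trace) does a more robust readiness check:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# simplified stand-in for waitforlisten: wait for the UNIX socket to exist
while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done
# from here on, every configuration step is an RPC against that socket:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_malloc_create 32 512 -b BaseBdev1_malloc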
00:21:35.235 00:41:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:35.235 00:41:08 -- common/autotest_common.sh@10 -- # set +x 00:21:35.235 [2024-04-27 00:41:08.532964] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:21:35.235 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:35.235 Zero copy mechanism will not be used. 00:21:35.235 [2024-04-27 00:41:08.533185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130709 ] 00:21:35.235 [2024-04-27 00:41:08.692940] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.509 [2024-04-27 00:41:08.879821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.510 [2024-04-27 00:41:09.053753] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.076 00:41:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:36.076 00:41:09 -- common/autotest_common.sh@850 -- # return 0 00:21:36.076 00:41:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:36.076 00:41:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:36.076 00:41:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:36.334 BaseBdev1_malloc 00:21:36.334 00:41:09 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:36.593 [2024-04-27 00:41:09.976765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:36.593 [2024-04-27 00:41:09.976877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.593 [2024-04-27 00:41:09.976915] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:36.593 [2024-04-27 00:41:09.976960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.593 [2024-04-27 00:41:09.979422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.593 [2024-04-27 00:41:09.979472] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:36.593 BaseBdev1 00:21:36.593 00:41:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:36.593 00:41:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:36.593 00:41:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:36.852 BaseBdev2_malloc 00:21:36.852 00:41:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:37.110 [2024-04-27 00:41:10.512953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:37.110 [2024-04-27 00:41:10.513066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.110 [2024-04-27 00:41:10.513114] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:37.110 [2024-04-27 00:41:10.513165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.110 [2024-04-27 00:41:10.515612] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:37.110 [2024-04-27 00:41:10.515675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:37.110 BaseBdev2 00:21:37.110 00:41:10 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:37.369 spare_malloc 00:21:37.369 00:41:10 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:37.627 spare_delay 00:21:37.627 00:41:10 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:37.886 [2024-04-27 00:41:11.248347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:37.886 [2024-04-27 00:41:11.248460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.886 [2024-04-27 00:41:11.248508] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:37.886 [2024-04-27 00:41:11.248553] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.886 [2024-04-27 00:41:11.251351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.886 [2024-04-27 00:41:11.251422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:37.886 spare 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:37.886 [2024-04-27 00:41:11.452409] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.886 [2024-04-27 00:41:11.454472] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.886 [2024-04-27 00:41:11.454782] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:21:37.886 [2024-04-27 00:41:11.454798] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:37.886 [2024-04-27 00:41:11.454994] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:37.886 [2024-04-27 00:41:11.455368] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:21:37.886 [2024-04-27 00:41:11.455394] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:21:37.886 [2024-04-27 00:41:11.455568] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:37.886 00:41:11 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.886 00:41:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.146 00:41:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:38.146 "name": "raid_bdev1", 00:21:38.146 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:38.146 "strip_size_kb": 0, 00:21:38.146 "state": "online", 00:21:38.146 "raid_level": "raid1", 00:21:38.146 "superblock": true, 00:21:38.146 "num_base_bdevs": 2, 00:21:38.146 "num_base_bdevs_discovered": 2, 00:21:38.146 "num_base_bdevs_operational": 2, 00:21:38.146 "base_bdevs_list": [ 00:21:38.146 { 00:21:38.146 "name": "BaseBdev1", 00:21:38.146 "uuid": "8fa3aa82-a3f1-5031-8af0-f8dcb5394565", 00:21:38.146 "is_configured": true, 00:21:38.146 "data_offset": 2048, 00:21:38.146 "data_size": 63488 00:21:38.146 }, 00:21:38.146 { 00:21:38.146 "name": "BaseBdev2", 00:21:38.146 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:38.146 "is_configured": true, 00:21:38.146 "data_offset": 2048, 00:21:38.146 "data_size": 63488 00:21:38.146 } 00:21:38.146 ] 00:21:38.146 }' 00:21:38.146 00:41:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:38.146 00:41:11 -- common/autotest_common.sh@10 -- # set +x 00:21:38.714 00:41:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:38.714 00:41:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:38.973 [2024-04-27 00:41:12.516917] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:38.973 00:41:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:38.973 00:41:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.973 00:41:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:39.232 00:41:12 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:39.232 00:41:12 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:39.232 00:41:12 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:39.232 00:41:12 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@12 -- # local i 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.232 00:41:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:39.490 [2024-04-27 00:41:12.948724] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:39.490 /dev/nbd0 00:21:39.490 00:41:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:39.490 00:41:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:39.490 00:41:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:39.490 00:41:12 -- common/autotest_common.sh@855 -- # local i 00:21:39.490 00:41:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:39.490 00:41:12 -- 
common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:39.490 00:41:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:39.490 00:41:12 -- common/autotest_common.sh@859 -- # break 00:21:39.490 00:41:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:39.490 00:41:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:39.490 00:41:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:39.490 1+0 records in 00:21:39.490 1+0 records out 00:21:39.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459876 s, 8.9 MB/s 00:21:39.490 00:41:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.490 00:41:13 -- common/autotest_common.sh@872 -- # size=4096 00:21:39.490 00:41:13 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:39.490 00:41:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:39.490 00:41:13 -- common/autotest_common.sh@875 -- # return 0 00:21:39.490 00:41:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:39.490 00:41:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:39.490 00:41:13 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:39.490 00:41:13 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:39.490 00:41:13 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:44.795 63488+0 records in 00:21:44.795 63488+0 records out 00:21:44.795 32505856 bytes (33 MB, 31 MiB) copied, 5.3075 s, 6.1 MB/s 00:21:44.795 00:41:18 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:44.795 00:41:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:44.795 00:41:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:44.795 00:41:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:44.795 00:41:18 -- bdev/nbd_common.sh@51 -- # local i 00:21:44.795 00:41:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:44.795 00:41:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:45.078 00:41:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:45.078 00:41:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:45.078 00:41:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:45.078 00:41:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:45.078 [2024-04-27 00:41:18.584411] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.078 00:41:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:45.078 00:41:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:45.078 00:41:18 -- bdev/nbd_common.sh@41 -- # break 00:21:45.078 00:41:18 -- bdev/nbd_common.sh@45 -- # return 0 00:21:45.078 00:41:18 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:45.337 [2024-04-27 00:41:18.816086] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@120 -- 
# local strip_size=0 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.337 00:41:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.595 00:41:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.595 "name": "raid_bdev1", 00:21:45.595 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:45.595 "strip_size_kb": 0, 00:21:45.595 "state": "online", 00:21:45.595 "raid_level": "raid1", 00:21:45.595 "superblock": true, 00:21:45.595 "num_base_bdevs": 2, 00:21:45.595 "num_base_bdevs_discovered": 1, 00:21:45.595 "num_base_bdevs_operational": 1, 00:21:45.595 "base_bdevs_list": [ 00:21:45.595 { 00:21:45.595 "name": null, 00:21:45.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.595 "is_configured": false, 00:21:45.595 "data_offset": 2048, 00:21:45.595 "data_size": 63488 00:21:45.595 }, 00:21:45.595 { 00:21:45.595 "name": "BaseBdev2", 00:21:45.595 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:45.595 "is_configured": true, 00:21:45.595 "data_offset": 2048, 00:21:45.595 "data_size": 63488 00:21:45.595 } 00:21:45.595 ] 00:21:45.595 }' 00:21:45.595 00:41:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.595 00:41:19 -- common/autotest_common.sh@10 -- # set +x 00:21:46.162 00:41:19 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:46.420 [2024-04-27 00:41:19.916326] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:46.421 [2024-04-27 00:41:19.916398] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.421 [2024-04-27 00:41:19.930681] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:21:46.421 [2024-04-27 00:41:19.932875] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.421 00:41:19 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:47.796 00:41:20 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.796 00:41:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:47.796 00:41:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:47.796 00:41:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:47.796 00:41:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:47.796 00:41:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.796 00:41:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.796 00:41:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:47.796 "name": "raid_bdev1", 00:21:47.796 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:47.796 "strip_size_kb": 0, 00:21:47.796 "state": "online", 00:21:47.796 "raid_level": "raid1", 00:21:47.796 "superblock": true, 00:21:47.796 "num_base_bdevs": 2, 00:21:47.797 "num_base_bdevs_discovered": 2, 00:21:47.797 "num_base_bdevs_operational": 2, 00:21:47.797 "process": { 
00:21:47.797 "type": "rebuild", 00:21:47.797 "target": "spare", 00:21:47.797 "progress": { 00:21:47.797 "blocks": 24576, 00:21:47.797 "percent": 38 00:21:47.797 } 00:21:47.797 }, 00:21:47.797 "base_bdevs_list": [ 00:21:47.797 { 00:21:47.797 "name": "spare", 00:21:47.797 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:47.797 "is_configured": true, 00:21:47.797 "data_offset": 2048, 00:21:47.797 "data_size": 63488 00:21:47.797 }, 00:21:47.797 { 00:21:47.797 "name": "BaseBdev2", 00:21:47.797 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:47.797 "is_configured": true, 00:21:47.797 "data_offset": 2048, 00:21:47.797 "data_size": 63488 00:21:47.797 } 00:21:47.797 ] 00:21:47.797 }' 00:21:47.797 00:41:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:47.797 00:41:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:47.797 00:41:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:47.797 00:41:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:47.797 00:41:21 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:48.056 [2024-04-27 00:41:21.482774] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:48.056 [2024-04-27 00:41:21.543365] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:48.056 [2024-04-27 00:41:21.543462] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.056 00:41:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.315 00:41:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.315 "name": "raid_bdev1", 00:21:48.315 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:48.315 "strip_size_kb": 0, 00:21:48.315 "state": "online", 00:21:48.315 "raid_level": "raid1", 00:21:48.315 "superblock": true, 00:21:48.315 "num_base_bdevs": 2, 00:21:48.315 "num_base_bdevs_discovered": 1, 00:21:48.315 "num_base_bdevs_operational": 1, 00:21:48.315 "base_bdevs_list": [ 00:21:48.315 { 00:21:48.315 "name": null, 00:21:48.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.315 "is_configured": false, 00:21:48.315 "data_offset": 2048, 00:21:48.315 "data_size": 63488 00:21:48.315 }, 00:21:48.315 { 00:21:48.315 "name": "BaseBdev2", 00:21:48.315 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:48.315 "is_configured": true, 00:21:48.315 "data_offset": 2048, 00:21:48.315 "data_size": 63488 00:21:48.315 } 00:21:48.315 ] 00:21:48.315 }' 00:21:48.315 00:41:21 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.315 00:41:21 -- common/autotest_common.sh@10 -- # set +x 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.251 "name": "raid_bdev1", 00:21:49.251 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:49.251 "strip_size_kb": 0, 00:21:49.251 "state": "online", 00:21:49.251 "raid_level": "raid1", 00:21:49.251 "superblock": true, 00:21:49.251 "num_base_bdevs": 2, 00:21:49.251 "num_base_bdevs_discovered": 1, 00:21:49.251 "num_base_bdevs_operational": 1, 00:21:49.251 "base_bdevs_list": [ 00:21:49.251 { 00:21:49.251 "name": null, 00:21:49.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.251 "is_configured": false, 00:21:49.251 "data_offset": 2048, 00:21:49.251 "data_size": 63488 00:21:49.251 }, 00:21:49.251 { 00:21:49.251 "name": "BaseBdev2", 00:21:49.251 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:49.251 "is_configured": true, 00:21:49.251 "data_offset": 2048, 00:21:49.251 "data_size": 63488 00:21:49.251 } 00:21:49.251 ] 00:21:49.251 }' 00:21:49.251 00:41:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.510 00:41:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:49.510 00:41:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.510 00:41:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:49.510 00:41:22 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:49.769 [2024-04-27 00:41:23.115144] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:49.769 [2024-04-27 00:41:23.115226] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:49.769 [2024-04-27 00:41:23.127662] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:21:49.769 [2024-04-27 00:41:23.129739] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:49.769 00:41:23 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:50.703 00:41:24 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.704 00:41:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.704 00:41:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.704 00:41:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.704 00:41:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.704 00:41:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.704 00:41:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.962 00:41:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.962 "name": "raid_bdev1", 00:21:50.962 "uuid": 
"f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:50.962 "strip_size_kb": 0, 00:21:50.962 "state": "online", 00:21:50.962 "raid_level": "raid1", 00:21:50.962 "superblock": true, 00:21:50.962 "num_base_bdevs": 2, 00:21:50.962 "num_base_bdevs_discovered": 2, 00:21:50.962 "num_base_bdevs_operational": 2, 00:21:50.962 "process": { 00:21:50.963 "type": "rebuild", 00:21:50.963 "target": "spare", 00:21:50.963 "progress": { 00:21:50.963 "blocks": 24576, 00:21:50.963 "percent": 38 00:21:50.963 } 00:21:50.963 }, 00:21:50.963 "base_bdevs_list": [ 00:21:50.963 { 00:21:50.963 "name": "spare", 00:21:50.963 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:50.963 "is_configured": true, 00:21:50.963 "data_offset": 2048, 00:21:50.963 "data_size": 63488 00:21:50.963 }, 00:21:50.963 { 00:21:50.963 "name": "BaseBdev2", 00:21:50.963 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:50.963 "is_configured": true, 00:21:50.963 "data_offset": 2048, 00:21:50.963 "data_size": 63488 00:21:50.963 } 00:21:50.963 ] 00:21:50.963 }' 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:50.963 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@657 -- # local timeout=434 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.963 00:41:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.221 00:41:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:51.221 "name": "raid_bdev1", 00:21:51.221 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:51.221 "strip_size_kb": 0, 00:21:51.221 "state": "online", 00:21:51.221 "raid_level": "raid1", 00:21:51.221 "superblock": true, 00:21:51.221 "num_base_bdevs": 2, 00:21:51.221 "num_base_bdevs_discovered": 2, 00:21:51.221 "num_base_bdevs_operational": 2, 00:21:51.221 "process": { 00:21:51.221 "type": "rebuild", 00:21:51.221 "target": "spare", 00:21:51.221 "progress": { 00:21:51.221 "blocks": 32768, 00:21:51.221 "percent": 51 00:21:51.221 } 00:21:51.221 }, 00:21:51.221 "base_bdevs_list": [ 00:21:51.221 { 00:21:51.221 "name": "spare", 00:21:51.221 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:51.221 "is_configured": true, 00:21:51.221 "data_offset": 2048, 00:21:51.221 "data_size": 63488 00:21:51.221 }, 00:21:51.221 { 00:21:51.221 "name": 
"BaseBdev2", 00:21:51.221 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:51.221 "is_configured": true, 00:21:51.221 "data_offset": 2048, 00:21:51.221 "data_size": 63488 00:21:51.221 } 00:21:51.221 ] 00:21:51.221 }' 00:21:51.221 00:41:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:51.480 00:41:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.480 00:41:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:51.480 00:41:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.480 00:41:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:52.416 00:41:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:52.416 00:41:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.416 00:41:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.416 00:41:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.416 00:41:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.416 00:41:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.416 00:41:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.416 00:41:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.675 00:41:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.675 "name": "raid_bdev1", 00:21:52.675 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:52.675 "strip_size_kb": 0, 00:21:52.675 "state": "online", 00:21:52.675 "raid_level": "raid1", 00:21:52.675 "superblock": true, 00:21:52.675 "num_base_bdevs": 2, 00:21:52.675 "num_base_bdevs_discovered": 2, 00:21:52.675 "num_base_bdevs_operational": 2, 00:21:52.675 "process": { 00:21:52.675 "type": "rebuild", 00:21:52.675 "target": "spare", 00:21:52.675 "progress": { 00:21:52.675 "blocks": 59392, 00:21:52.675 "percent": 93 00:21:52.675 } 00:21:52.675 }, 00:21:52.675 "base_bdevs_list": [ 00:21:52.675 { 00:21:52.675 "name": "spare", 00:21:52.675 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:52.675 "is_configured": true, 00:21:52.675 "data_offset": 2048, 00:21:52.675 "data_size": 63488 00:21:52.675 }, 00:21:52.675 { 00:21:52.675 "name": "BaseBdev2", 00:21:52.675 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:52.675 "is_configured": true, 00:21:52.675 "data_offset": 2048, 00:21:52.675 "data_size": 63488 00:21:52.675 } 00:21:52.675 ] 00:21:52.675 }' 00:21:52.675 00:41:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.675 00:41:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.675 00:41:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.675 00:41:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.675 00:41:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:52.675 [2024-04-27 00:41:26.249209] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:52.675 [2024-04-27 00:41:26.249326] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:52.675 [2024-04-27 00:41:26.249492] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.054 00:41:27 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.054 "name": "raid_bdev1", 00:21:54.054 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:54.054 "strip_size_kb": 0, 00:21:54.054 "state": "online", 00:21:54.054 "raid_level": "raid1", 00:21:54.054 "superblock": true, 00:21:54.054 "num_base_bdevs": 2, 00:21:54.054 "num_base_bdevs_discovered": 2, 00:21:54.054 "num_base_bdevs_operational": 2, 00:21:54.054 "base_bdevs_list": [ 00:21:54.054 { 00:21:54.054 "name": "spare", 00:21:54.054 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:54.054 "is_configured": true, 00:21:54.054 "data_offset": 2048, 00:21:54.054 "data_size": 63488 00:21:54.054 }, 00:21:54.054 { 00:21:54.054 "name": "BaseBdev2", 00:21:54.054 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:54.054 "is_configured": true, 00:21:54.054 "data_offset": 2048, 00:21:54.054 "data_size": 63488 00:21:54.054 } 00:21:54.054 ] 00:21:54.054 }' 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@660 -- # break 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.054 00:41:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.313 00:41:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.313 "name": "raid_bdev1", 00:21:54.313 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:54.313 "strip_size_kb": 0, 00:21:54.313 "state": "online", 00:21:54.313 "raid_level": "raid1", 00:21:54.313 "superblock": true, 00:21:54.313 "num_base_bdevs": 2, 00:21:54.313 "num_base_bdevs_discovered": 2, 00:21:54.313 "num_base_bdevs_operational": 2, 00:21:54.313 "base_bdevs_list": [ 00:21:54.313 { 00:21:54.313 "name": "spare", 00:21:54.313 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:54.313 "is_configured": true, 00:21:54.313 "data_offset": 2048, 00:21:54.313 "data_size": 63488 00:21:54.313 }, 00:21:54.313 { 00:21:54.313 "name": "BaseBdev2", 00:21:54.313 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:54.313 "is_configured": true, 00:21:54.313 "data_offset": 2048, 00:21:54.313 "data_size": 63488 00:21:54.313 } 00:21:54.313 ] 00:21:54.313 }' 00:21:54.313 00:41:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.313 00:41:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:54.313 00:41:27 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.571 00:41:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.830 00:41:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.830 "name": "raid_bdev1", 00:21:54.830 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:54.830 "strip_size_kb": 0, 00:21:54.830 "state": "online", 00:21:54.830 "raid_level": "raid1", 00:21:54.830 "superblock": true, 00:21:54.830 "num_base_bdevs": 2, 00:21:54.830 "num_base_bdevs_discovered": 2, 00:21:54.830 "num_base_bdevs_operational": 2, 00:21:54.830 "base_bdevs_list": [ 00:21:54.830 { 00:21:54.830 "name": "spare", 00:21:54.830 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:54.830 "is_configured": true, 00:21:54.830 "data_offset": 2048, 00:21:54.830 "data_size": 63488 00:21:54.830 }, 00:21:54.830 { 00:21:54.830 "name": "BaseBdev2", 00:21:54.830 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:54.830 "is_configured": true, 00:21:54.830 "data_offset": 2048, 00:21:54.830 "data_size": 63488 00:21:54.830 } 00:21:54.830 ] 00:21:54.830 }' 00:21:54.830 00:41:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.830 00:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:55.398 00:41:28 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:55.656 [2024-04-27 00:41:29.050207] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.656 [2024-04-27 00:41:29.050244] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.656 [2024-04-27 00:41:29.050353] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.656 [2024-04-27 00:41:29.050485] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.656 [2024-04-27 00:41:29.050505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:21:55.656 00:41:29 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.656 00:41:29 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:55.915 00:41:29 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:55.915 00:41:29 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:55.915 00:41:29 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
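One genuine harness bug is preserved verbatim earlier in this run: bdev_raid.sh line 617 evaluated '[' = false ']' and bash reported "[: =: unary operator expected". The left operand expanded to an empty string, so the [ builtin saw only two arguments and could not parse the comparison; the run continued because the broken test sits in a conditional, where the non-zero status simply selects the other branch. Quoting the operand keeps the expression well-formed even when the variable is empty or unset ($flag below is hypothetical; the log does not show which variable was empty):

[ $flag = false ]      # collapses to [ = false ] when $flag is empty: syntax error
[ "$flag" = false ]    # stays a three-argument comparison and just evaluates to false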
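What follows is the data-integrity check for the superblock variant. The surviving base bdev and the rebuilt spare are exported as NBD block devices and compared byte for byte; cmp -i 1048576 skips the first 1 MiB on both sides, which matches the data_offset of 2048 blocks times the 512-byte blocklen reported in the raid JSON, so the on-disk superblock region is excluded from the comparison (the earlier non-superblock test used cmp -i 0). Stripped of the xtrace noise, the sequence is:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc nbd_start_disk BaseBdev1 /dev/nbd0
$rpc nbd_start_disk spare /dev/nbd1
cmp -i 1048576 /dev/nbd0 /dev/nbd1    # exit 0 only if the payloads are identical
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1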
00:21:55.915 00:41:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:55.915 00:41:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:55.915 00:41:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:55.915 00:41:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:55.915 00:41:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:55.915 00:41:29 -- bdev/nbd_common.sh@12 -- # local i 00:21:55.915 00:41:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:55.915 00:41:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.915 00:41:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:56.173 /dev/nbd0 00:21:56.173 00:41:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:56.173 00:41:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:56.173 00:41:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:56.173 00:41:29 -- common/autotest_common.sh@855 -- # local i 00:21:56.173 00:41:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:56.173 00:41:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:56.173 00:41:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:56.173 00:41:29 -- common/autotest_common.sh@859 -- # break 00:21:56.173 00:41:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:56.173 00:41:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:56.173 00:41:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.173 1+0 records in 00:21:56.173 1+0 records out 00:21:56.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039745 s, 10.3 MB/s 00:21:56.174 00:41:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.174 00:41:29 -- common/autotest_common.sh@872 -- # size=4096 00:21:56.174 00:41:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.174 00:41:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:56.174 00:41:29 -- common/autotest_common.sh@875 -- # return 0 00:21:56.174 00:41:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.174 00:41:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.174 00:41:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:56.433 /dev/nbd1 00:21:56.433 00:41:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:56.433 00:41:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:56.433 00:41:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:21:56.433 00:41:29 -- common/autotest_common.sh@855 -- # local i 00:21:56.433 00:41:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:56.433 00:41:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:56.433 00:41:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:21:56.433 00:41:29 -- common/autotest_common.sh@859 -- # break 00:21:56.433 00:41:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:56.433 00:41:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:56.433 00:41:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.433 1+0 records in 00:21:56.433 1+0 records out 00:21:56.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000299398 s, 13.7 MB/s 00:21:56.433 00:41:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.433 00:41:29 -- common/autotest_common.sh@872 -- # size=4096 00:21:56.433 00:41:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.433 00:41:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:56.433 00:41:29 -- common/autotest_common.sh@875 -- # return 0 00:21:56.433 00:41:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.433 00:41:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.433 00:41:29 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:56.692 00:41:30 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@51 -- # local i 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@41 -- # break 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.692 00:41:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:56.951 00:41:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:56.951 00:41:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:56.951 00:41:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:56.951 00:41:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.951 00:41:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.951 00:41:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:56.951 00:41:30 -- bdev/nbd_common.sh@41 -- # break 00:21:56.951 00:41:30 -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.951 00:41:30 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:56.951 00:41:30 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:56.951 00:41:30 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:56.951 00:41:30 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:57.209 00:41:30 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:57.467 [2024-04-27 00:41:30.912525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:57.467 [2024-04-27 00:41:30.912654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.467 [2024-04-27 00:41:30.912695] vbdev_passthru.c: 676:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000009980 00:21:57.467 [2024-04-27 00:41:30.912724] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.467 [2024-04-27 00:41:30.915247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.467 [2024-04-27 00:41:30.915331] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:57.467 [2024-04-27 00:41:30.915456] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:57.467 [2024-04-27 00:41:30.915561] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.467 BaseBdev1 00:21:57.467 00:41:30 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:57.468 00:41:30 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:57.468 00:41:30 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:57.726 00:41:31 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:57.985 [2024-04-27 00:41:31.336682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:57.985 [2024-04-27 00:41:31.336783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.985 [2024-04-27 00:41:31.336827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:57.985 [2024-04-27 00:41:31.336860] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.985 [2024-04-27 00:41:31.337378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.985 [2024-04-27 00:41:31.337461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:57.985 [2024-04-27 00:41:31.337589] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:57.985 [2024-04-27 00:41:31.337605] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:57.985 [2024-04-27 00:41:31.337612] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.985 [2024-04-27 00:41:31.337638] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:21:57.985 [2024-04-27 00:41:31.337707] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.985 BaseBdev2 00:21:57.985 00:41:31 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:58.243 00:41:31 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:58.244 [2024-04-27 00:41:31.804778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:58.244 [2024-04-27 00:41:31.804892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.244 [2024-04-27 00:41:31.804935] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:58.244 [2024-04-27 00:41:31.804958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.244 [2024-04-27 00:41:31.805563] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.244 [2024-04-27 00:41:31.805635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:58.244 [2024-04-27 00:41:31.805771] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:58.244 [2024-04-27 00:41:31.805807] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.244 spare 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.244 00:41:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.502 [2024-04-27 00:41:31.905927] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:21:58.502 [2024-04-27 00:41:31.905956] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:58.502 [2024-04-27 00:41:31.906101] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:21:58.502 [2024-04-27 00:41:31.906588] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:21:58.502 [2024-04-27 00:41:31.906614] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:21:58.502 [2024-04-27 00:41:31.906792] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.760 00:41:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.760 "name": "raid_bdev1", 00:21:58.760 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:58.760 "strip_size_kb": 0, 00:21:58.760 "state": "online", 00:21:58.760 "raid_level": "raid1", 00:21:58.760 "superblock": true, 00:21:58.760 "num_base_bdevs": 2, 00:21:58.760 "num_base_bdevs_discovered": 2, 00:21:58.760 "num_base_bdevs_operational": 2, 00:21:58.760 "base_bdevs_list": [ 00:21:58.760 { 00:21:58.760 "name": "spare", 00:21:58.761 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:58.761 "is_configured": true, 00:21:58.761 "data_offset": 2048, 00:21:58.761 "data_size": 63488 00:21:58.761 }, 00:21:58.761 { 00:21:58.761 "name": "BaseBdev2", 00:21:58.761 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:58.761 "is_configured": true, 00:21:58.761 "data_offset": 2048, 00:21:58.761 "data_size": 63488 00:21:58.761 } 00:21:58.761 ] 00:21:58.761 }' 00:21:58.761 00:41:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.761 00:41:32 -- common/autotest_common.sh@10 -- # set +x 00:21:59.328 00:41:32 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:59.328 00:41:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:59.328 00:41:32 -- 
bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:59.328 00:41:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:59.328 00:41:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:59.328 00:41:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.328 00:41:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.586 00:41:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:59.586 "name": "raid_bdev1", 00:21:59.586 "uuid": "f8d1db3e-8aea-4d19-b467-875a2d1e9db3", 00:21:59.586 "strip_size_kb": 0, 00:21:59.586 "state": "online", 00:21:59.586 "raid_level": "raid1", 00:21:59.586 "superblock": true, 00:21:59.586 "num_base_bdevs": 2, 00:21:59.586 "num_base_bdevs_discovered": 2, 00:21:59.586 "num_base_bdevs_operational": 2, 00:21:59.586 "base_bdevs_list": [ 00:21:59.586 { 00:21:59.586 "name": "spare", 00:21:59.586 "uuid": "b756322c-7466-599f-8d92-919e79ea5198", 00:21:59.586 "is_configured": true, 00:21:59.586 "data_offset": 2048, 00:21:59.586 "data_size": 63488 00:21:59.586 }, 00:21:59.586 { 00:21:59.586 "name": "BaseBdev2", 00:21:59.586 "uuid": "8b561506-73bf-59ed-b5a2-3ed6237f22cc", 00:21:59.586 "is_configured": true, 00:21:59.586 "data_offset": 2048, 00:21:59.586 "data_size": 63488 00:21:59.586 } 00:21:59.586 ] 00:21:59.586 }' 00:21:59.586 00:41:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:59.586 00:41:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:59.586 00:41:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:59.586 00:41:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:59.586 00:41:33 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.586 00:41:33 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:59.843 00:41:33 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.843 00:41:33 -- bdev/bdev_raid.sh@709 -- # killprocess 130709 00:21:59.844 00:41:33 -- common/autotest_common.sh@936 -- # '[' -z 130709 ']' 00:21:59.844 00:41:33 -- common/autotest_common.sh@940 -- # kill -0 130709 00:21:59.844 00:41:33 -- common/autotest_common.sh@941 -- # uname 00:21:59.844 00:41:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:59.844 00:41:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 130709 00:21:59.844 00:41:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:59.844 00:41:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:59.844 00:41:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 130709' 00:21:59.844 killing process with pid 130709 00:21:59.844 00:41:33 -- common/autotest_common.sh@955 -- # kill 130709 00:21:59.844 Received shutdown signal, test time was about 60.000000 seconds 00:21:59.844 00:21:59.844 Latency(us) 00:21:59.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.844 =================================================================================================================== 00:21:59.844 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.844 [2024-04-27 00:41:33.294572] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:59.844 [2024-04-27 00:41:33.294663] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.844 [2024-04-27 00:41:33.294744] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.844 [2024-04-27 00:41:33.294758] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:21:59.844 00:41:33 -- common/autotest_common.sh@960 -- # wait 130709 00:22:00.101 [2024-04-27 00:41:33.508695] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:01.040 00:22:01.040 real 0m26.003s 00:22:01.040 user 0m37.194s 00:22:01.040 sys 0m4.692s 00:22:01.040 00:41:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:01.040 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:22:01.040 ************************************ 00:22:01.040 END TEST raid_rebuild_test_sb 00:22:01.040 ************************************ 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:22:01.040 00:41:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:01.040 00:41:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:01.040 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:22:01.040 ************************************ 00:22:01.040 START TEST raid_rebuild_test_io 00:22:01.040 ************************************ 00:22:01.040 00:41:34 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 false true 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@544 -- # raid_pid=131340 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131340 /var/tmp/spdk-raid.sock 00:22:01.040 00:41:34 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:01.040 00:41:34 -- common/autotest_common.sh@817 -- # '[' -z 131340 ']' 00:22:01.040 00:41:34 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:22:01.040 00:41:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:01.040 00:41:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:01.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:01.040 00:41:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:01.040 00:41:34 -- common/autotest_common.sh@10 -- # set +x 00:22:01.299 [2024-04-27 00:41:34.628882] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:22:01.299 [2024-04-27 00:41:34.629092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131340 ] 00:22:01.299 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:01.299 Zero copy mechanism will not be used. 00:22:01.299 [2024-04-27 00:41:34.788070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.557 [2024-04-27 00:41:34.972714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.815 [2024-04-27 00:41:35.150476] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.073 00:41:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:02.073 00:41:35 -- common/autotest_common.sh@850 -- # return 0 00:22:02.073 00:41:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:02.073 00:41:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:02.073 00:41:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:02.332 BaseBdev1 00:22:02.332 00:41:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:02.332 00:41:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:02.332 00:41:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:02.590 BaseBdev2 00:22:02.590 00:41:36 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:02.849 spare_malloc 00:22:02.849 00:41:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:03.107 spare_delay 00:22:03.107 00:41:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:03.365 [2024-04-27 00:41:36.745527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:03.365 [2024-04-27 00:41:36.745667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.365 [2024-04-27 00:41:36.745711] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:03.365 [2024-04-27 00:41:36.745784] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.365 [2024-04-27 00:41:36.748348] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.365 [2024-04-27 00:41:36.748419] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:03.365 spare 00:22:03.365 
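Note: the "spare" device attached above is a three-layer stack over a single malloc bdev. A minimal sketch of the same RPC sequence, assuming an SPDK app is already listening on /var/tmp/spdk-raid.sock (names and values as traced in this run; bdev_delay_create latencies are given in microseconds):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 32 MiB backing store with 512-byte blocks
  $RPC bdev_malloc_create 32 512 -b spare_malloc
  # delay bdev: zero read latency, 100000 us (100 ms) average and p99 write latency
  $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  # passthru bdev named "spare" -- the name the raid tests later add and remove
  $RPC bdev_passthru_create -b spare_delay -p spare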
00:41:36 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:03.624 [2024-04-27 00:41:36.965631] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.624 [2024-04-27 00:41:36.967793] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.624 [2024-04-27 00:41:36.967905] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:22:03.624 [2024-04-27 00:41:36.967919] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:03.624 [2024-04-27 00:41:36.968093] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:03.624 [2024-04-27 00:41:36.968501] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:22:03.624 [2024-04-27 00:41:36.968526] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:22:03.624 [2024-04-27 00:41:36.968727] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.624 00:41:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.624 00:41:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.624 "name": "raid_bdev1", 00:22:03.624 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:03.624 "strip_size_kb": 0, 00:22:03.624 "state": "online", 00:22:03.624 "raid_level": "raid1", 00:22:03.624 "superblock": false, 00:22:03.624 "num_base_bdevs": 2, 00:22:03.624 "num_base_bdevs_discovered": 2, 00:22:03.624 "num_base_bdevs_operational": 2, 00:22:03.624 "base_bdevs_list": [ 00:22:03.624 { 00:22:03.624 "name": "BaseBdev1", 00:22:03.624 "uuid": "cf8a651b-ddd3-4047-b66b-e05d60ed0c9f", 00:22:03.624 "is_configured": true, 00:22:03.624 "data_offset": 0, 00:22:03.624 "data_size": 65536 00:22:03.624 }, 00:22:03.624 { 00:22:03.624 "name": "BaseBdev2", 00:22:03.624 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:03.624 "is_configured": true, 00:22:03.624 "data_offset": 0, 00:22:03.624 "data_size": 65536 00:22:03.624 } 00:22:03.624 ] 00:22:03.624 }' 00:22:03.624 00:41:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.624 00:41:37 -- common/autotest_common.sh@10 -- # set +x 00:22:04.560 00:41:37 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:04.560 00:41:37 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:04.560 
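Note: the verify_raid_bdev_state helper traced above reduces to one RPC plus jq filters over the result. A condensed sketch of the same checks, assuming the raid_bdev1 array from the step above (field names exactly as printed in the JSON):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  state=$(jq -r '.state' <<< "$info")                          # expect "online"
  level=$(jq -r '.raid_level' <<< "$info")                     # expect "raid1"
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info") # expect 2
  [[ $state == online && $level == raid1 && $discovered == 2 ]]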
[2024-04-27 00:41:38.086063] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.560 00:41:38 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:04.560 00:41:38 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.560 00:41:38 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:04.818 00:41:38 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:04.818 00:41:38 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:04.818 00:41:38 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:04.818 00:41:38 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:05.076 [2024-04-27 00:41:38.433324] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:05.076 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:05.076 Zero copy mechanism will not be used. 00:22:05.076 Running I/O for 60 seconds... 00:22:05.076 [2024-04-27 00:41:38.532070] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:05.076 [2024-04-27 00:41:38.544758] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:05.076 00:41:38 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:05.076 00:41:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.076 00:41:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.076 00:41:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.076 00:41:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.077 00:41:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:05.077 00:41:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:05.077 00:41:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.077 00:41:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.077 00:41:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.077 00:41:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.077 00:41:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.336 00:41:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.336 "name": "raid_bdev1", 00:22:05.336 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:05.336 "strip_size_kb": 0, 00:22:05.336 "state": "online", 00:22:05.336 "raid_level": "raid1", 00:22:05.336 "superblock": false, 00:22:05.336 "num_base_bdevs": 2, 00:22:05.336 "num_base_bdevs_discovered": 1, 00:22:05.336 "num_base_bdevs_operational": 1, 00:22:05.336 "base_bdevs_list": [ 00:22:05.336 { 00:22:05.336 "name": null, 00:22:05.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.336 "is_configured": false, 00:22:05.336 "data_offset": 0, 00:22:05.336 "data_size": 65536 00:22:05.336 }, 00:22:05.336 { 00:22:05.336 "name": "BaseBdev2", 00:22:05.336 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:05.336 "is_configured": true, 00:22:05.336 "data_offset": 0, 00:22:05.336 "data_size": 65536 00:22:05.336 } 00:22:05.336 ] 00:22:05.336 }' 00:22:05.336 00:41:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.336 00:41:38 -- common/autotest_common.sh@10 -- # set +x 00:22:05.902 00:41:39 -- 
bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:06.161 [2024-04-27 00:41:39.712518] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:06.161 [2024-04-27 00:41:39.712593] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:06.420 [2024-04-27 00:41:39.753286] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:06.420 [2024-04-27 00:41:39.755554] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:06.420 00:41:39 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:06.420 [2024-04-27 00:41:39.886248] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:06.420 [2024-04-27 00:41:39.886924] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:06.678 [2024-04-27 00:41:40.029274] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:06.678 [2024-04-27 00:41:40.029560] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:06.937 [2024-04-27 00:41:40.418425] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:06.937 [2024-04-27 00:41:40.418905] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:07.195 [2024-04-27 00:41:40.551142] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:07.195 00:41:40 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:07.195 00:41:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:07.195 00:41:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:07.195 00:41:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:07.195 00:41:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:07.195 00:41:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.195 00:41:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.454 [2024-04-27 00:41:40.789543] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:07.454 00:41:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:07.454 "name": "raid_bdev1", 00:22:07.454 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:07.454 "strip_size_kb": 0, 00:22:07.454 "state": "online", 00:22:07.454 "raid_level": "raid1", 00:22:07.454 "superblock": false, 00:22:07.454 "num_base_bdevs": 2, 00:22:07.454 "num_base_bdevs_discovered": 2, 00:22:07.454 "num_base_bdevs_operational": 2, 00:22:07.454 "process": { 00:22:07.454 "type": "rebuild", 00:22:07.454 "target": "spare", 00:22:07.454 "progress": { 00:22:07.454 "blocks": 14336, 00:22:07.454 "percent": 21 00:22:07.454 } 00:22:07.454 }, 00:22:07.454 "base_bdevs_list": [ 00:22:07.454 { 00:22:07.454 "name": "spare", 00:22:07.454 "uuid": "65633c24-ec4f-5643-b1f9-b5adc9898512", 00:22:07.454 "is_configured": true, 00:22:07.454 "data_offset": 0, 00:22:07.454 "data_size": 65536 00:22:07.454 }, 00:22:07.454 { 
00:22:07.454 "name": "BaseBdev2", 00:22:07.454 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:07.454 "is_configured": true, 00:22:07.454 "data_offset": 0, 00:22:07.454 "data_size": 65536 00:22:07.454 } 00:22:07.454 ] 00:22:07.454 }' 00:22:07.454 00:41:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:07.454 [2024-04-27 00:41:41.013899] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:07.454 [2024-04-27 00:41:41.014304] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:07.454 00:41:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:07.454 00:41:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:07.713 00:41:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:07.713 00:41:41 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:07.973 [2024-04-27 00:41:41.312954] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:07.973 [2024-04-27 00:41:41.366596] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:07.973 [2024-04-27 00:41:41.401366] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:07.973 [2024-04-27 00:41:41.414787] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:07.973 [2024-04-27 00:41:41.452076] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.973 00:41:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.974 00:41:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.234 00:41:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.234 "name": "raid_bdev1", 00:22:08.234 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:08.234 "strip_size_kb": 0, 00:22:08.234 "state": "online", 00:22:08.234 "raid_level": "raid1", 00:22:08.234 "superblock": false, 00:22:08.234 "num_base_bdevs": 2, 00:22:08.234 "num_base_bdevs_discovered": 1, 00:22:08.234 "num_base_bdevs_operational": 1, 00:22:08.234 "base_bdevs_list": [ 00:22:08.234 { 00:22:08.234 "name": null, 00:22:08.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.234 "is_configured": false, 00:22:08.234 "data_offset": 0, 00:22:08.234 "data_size": 65536 00:22:08.234 }, 00:22:08.234 { 00:22:08.234 "name": "BaseBdev2", 00:22:08.234 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 
00:22:08.234 "is_configured": true, 00:22:08.234 "data_offset": 0, 00:22:08.234 "data_size": 65536 00:22:08.234 } 00:22:08.234 ] 00:22:08.234 }' 00:22:08.234 00:41:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.234 00:41:41 -- common/autotest_common.sh@10 -- # set +x 00:22:08.802 00:41:42 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:08.802 00:41:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:08.802 00:41:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:08.802 00:41:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:08.802 00:41:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:08.802 00:41:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.802 00:41:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.061 00:41:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.062 "name": "raid_bdev1", 00:22:09.062 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:09.062 "strip_size_kb": 0, 00:22:09.062 "state": "online", 00:22:09.062 "raid_level": "raid1", 00:22:09.062 "superblock": false, 00:22:09.062 "num_base_bdevs": 2, 00:22:09.062 "num_base_bdevs_discovered": 1, 00:22:09.062 "num_base_bdevs_operational": 1, 00:22:09.062 "base_bdevs_list": [ 00:22:09.062 { 00:22:09.062 "name": null, 00:22:09.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.062 "is_configured": false, 00:22:09.062 "data_offset": 0, 00:22:09.062 "data_size": 65536 00:22:09.062 }, 00:22:09.062 { 00:22:09.062 "name": "BaseBdev2", 00:22:09.062 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:09.062 "is_configured": true, 00:22:09.062 "data_offset": 0, 00:22:09.062 "data_size": 65536 00:22:09.062 } 00:22:09.062 ] 00:22:09.062 }' 00:22:09.062 00:41:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.062 00:41:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:09.062 00:41:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.320 00:41:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:09.320 00:41:42 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:09.320 [2024-04-27 00:41:42.883667] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:09.320 [2024-04-27 00:41:42.883735] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:09.579 [2024-04-27 00:41:42.926791] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:09.579 [2024-04-27 00:41:42.928826] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:09.579 00:41:42 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:09.579 [2024-04-27 00:41:43.058675] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:09.579 [2024-04-27 00:41:43.059298] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:09.836 [2024-04-27 00:41:43.268540] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:09.836 [2024-04-27 00:41:43.268868] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:10.095 
[2024-04-27 00:41:43.597352] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:10.353 [2024-04-27 00:41:43.707500] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:10.353 [2024-04-27 00:41:43.923643] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:10.353 00:41:43 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.353 00:41:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:10.353 00:41:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:10.353 00:41:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:10.353 00:41:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.611 00:41:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.611 00:41:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.611 [2024-04-27 00:41:44.131984] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:10.611 [2024-04-27 00:41:44.132302] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:10.611 00:41:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:10.611 "name": "raid_bdev1", 00:22:10.611 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:10.611 "strip_size_kb": 0, 00:22:10.611 "state": "online", 00:22:10.611 "raid_level": "raid1", 00:22:10.611 "superblock": false, 00:22:10.611 "num_base_bdevs": 2, 00:22:10.611 "num_base_bdevs_discovered": 2, 00:22:10.611 "num_base_bdevs_operational": 2, 00:22:10.611 "process": { 00:22:10.611 "type": "rebuild", 00:22:10.611 "target": "spare", 00:22:10.611 "progress": { 00:22:10.611 "blocks": 16384, 00:22:10.611 "percent": 25 00:22:10.611 } 00:22:10.611 }, 00:22:10.611 "base_bdevs_list": [ 00:22:10.611 { 00:22:10.611 "name": "spare", 00:22:10.611 "uuid": "65633c24-ec4f-5643-b1f9-b5adc9898512", 00:22:10.611 "is_configured": true, 00:22:10.611 "data_offset": 0, 00:22:10.611 "data_size": 65536 00:22:10.611 }, 00:22:10.611 { 00:22:10.611 "name": "BaseBdev2", 00:22:10.611 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:10.611 "is_configured": true, 00:22:10.611 "data_offset": 0, 00:22:10.611 "data_size": 65536 00:22:10.611 } 00:22:10.611 ] 00:22:10.611 }' 00:22:10.612 00:41:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@657 -- # local timeout=454 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.870 00:41:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.129 00:41:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.129 "name": "raid_bdev1", 00:22:11.129 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:11.129 "strip_size_kb": 0, 00:22:11.129 "state": "online", 00:22:11.129 "raid_level": "raid1", 00:22:11.129 "superblock": false, 00:22:11.129 "num_base_bdevs": 2, 00:22:11.129 "num_base_bdevs_discovered": 2, 00:22:11.129 "num_base_bdevs_operational": 2, 00:22:11.129 "process": { 00:22:11.129 "type": "rebuild", 00:22:11.129 "target": "spare", 00:22:11.129 "progress": { 00:22:11.129 "blocks": 20480, 00:22:11.129 "percent": 31 00:22:11.129 } 00:22:11.129 }, 00:22:11.129 "base_bdevs_list": [ 00:22:11.129 { 00:22:11.129 "name": "spare", 00:22:11.129 "uuid": "65633c24-ec4f-5643-b1f9-b5adc9898512", 00:22:11.129 "is_configured": true, 00:22:11.129 "data_offset": 0, 00:22:11.129 "data_size": 65536 00:22:11.129 }, 00:22:11.129 { 00:22:11.129 "name": "BaseBdev2", 00:22:11.129 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:11.129 "is_configured": true, 00:22:11.129 "data_offset": 0, 00:22:11.129 "data_size": 65536 00:22:11.129 } 00:22:11.129 ] 00:22:11.129 }' 00:22:11.129 00:41:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:11.129 00:41:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:11.129 00:41:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:11.129 00:41:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:11.129 00:41:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:11.387 [2024-04-27 00:41:44.788857] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:11.387 [2024-04-27 00:41:44.917431] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:11.956 [2024-04-27 00:41:45.252485] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:11.956 [2024-04-27 00:41:45.252708] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:11.956 [2024-04-27 00:41:45.468572] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:12.214 00:41:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:12.214 00:41:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.214 00:41:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:12.214 00:41:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:12.214 00:41:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:12.214 00:41:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:12.215 00:41:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.215 00:41:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:22:12.215 [2024-04-27 00:41:45.689680] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:12.473 00:41:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:12.473 "name": "raid_bdev1", 00:22:12.473 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:12.473 "strip_size_kb": 0, 00:22:12.473 "state": "online", 00:22:12.473 "raid_level": "raid1", 00:22:12.473 "superblock": false, 00:22:12.473 "num_base_bdevs": 2, 00:22:12.473 "num_base_bdevs_discovered": 2, 00:22:12.473 "num_base_bdevs_operational": 2, 00:22:12.473 "process": { 00:22:12.473 "type": "rebuild", 00:22:12.473 "target": "spare", 00:22:12.473 "progress": { 00:22:12.473 "blocks": 40960, 00:22:12.473 "percent": 62 00:22:12.473 } 00:22:12.473 }, 00:22:12.473 "base_bdevs_list": [ 00:22:12.473 { 00:22:12.473 "name": "spare", 00:22:12.473 "uuid": "65633c24-ec4f-5643-b1f9-b5adc9898512", 00:22:12.473 "is_configured": true, 00:22:12.473 "data_offset": 0, 00:22:12.473 "data_size": 65536 00:22:12.473 }, 00:22:12.473 { 00:22:12.473 "name": "BaseBdev2", 00:22:12.473 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:12.473 "is_configured": true, 00:22:12.473 "data_offset": 0, 00:22:12.473 "data_size": 65536 00:22:12.473 } 00:22:12.473 ] 00:22:12.473 }' 00:22:12.473 00:41:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:12.473 00:41:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.473 00:41:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:12.473 00:41:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.473 00:41:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:12.732 [2024-04-27 00:41:46.119153] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:12.992 [2024-04-27 00:41:46.448615] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:13.251 [2024-04-27 00:41:46.656634] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:13.510 00:41:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:13.510 00:41:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.510 00:41:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.510 00:41:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:13.510 00:41:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:13.510 00:41:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:13.510 00:41:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.510 00:41:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.769 [2024-04-27 00:41:47.312801] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:13.769 00:41:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.769 "name": "raid_bdev1", 00:22:13.769 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:13.769 "strip_size_kb": 0, 00:22:13.769 "state": "online", 00:22:13.769 "raid_level": "raid1", 00:22:13.769 "superblock": false, 00:22:13.769 "num_base_bdevs": 2, 00:22:13.769 "num_base_bdevs_discovered": 2, 00:22:13.769 "num_base_bdevs_operational": 2, 00:22:13.769 "process": { 00:22:13.769 "type": "rebuild", 
00:22:13.769 "target": "spare", 00:22:13.769 "progress": { 00:22:13.769 "blocks": 63488, 00:22:13.769 "percent": 96 00:22:13.769 } 00:22:13.769 }, 00:22:13.769 "base_bdevs_list": [ 00:22:13.769 { 00:22:13.769 "name": "spare", 00:22:13.769 "uuid": "65633c24-ec4f-5643-b1f9-b5adc9898512", 00:22:13.769 "is_configured": true, 00:22:13.769 "data_offset": 0, 00:22:13.769 "data_size": 65536 00:22:13.769 }, 00:22:13.769 { 00:22:13.769 "name": "BaseBdev2", 00:22:13.769 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:13.769 "is_configured": true, 00:22:13.769 "data_offset": 0, 00:22:13.769 "data_size": 65536 00:22:13.769 } 00:22:13.769 ] 00:22:13.769 }' 00:22:13.769 00:41:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.028 00:41:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.028 00:41:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.028 00:41:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.028 00:41:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:14.028 [2024-04-27 00:41:47.419051] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:14.028 [2024-04-27 00:41:47.420526] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.964 00:41:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:14.964 00:41:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.964 00:41:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.964 00:41:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:14.964 00:41:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:14.964 00:41:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.964 00:41:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.964 00:41:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.223 "name": "raid_bdev1", 00:22:15.223 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:15.223 "strip_size_kb": 0, 00:22:15.223 "state": "online", 00:22:15.223 "raid_level": "raid1", 00:22:15.223 "superblock": false, 00:22:15.223 "num_base_bdevs": 2, 00:22:15.223 "num_base_bdevs_discovered": 2, 00:22:15.223 "num_base_bdevs_operational": 2, 00:22:15.223 "base_bdevs_list": [ 00:22:15.223 { 00:22:15.223 "name": "spare", 00:22:15.223 "uuid": "65633c24-ec4f-5643-b1f9-b5adc9898512", 00:22:15.223 "is_configured": true, 00:22:15.223 "data_offset": 0, 00:22:15.223 "data_size": 65536 00:22:15.223 }, 00:22:15.223 { 00:22:15.223 "name": "BaseBdev2", 00:22:15.223 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:15.223 "is_configured": true, 00:22:15.223 "data_offset": 0, 00:22:15.223 "data_size": 65536 00:22:15.223 } 00:22:15.223 ] 00:22:15.223 }' 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@660 -- # break 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:15.223 00:41:48 
-- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:15.223 00:41:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:15.224 00:41:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:15.224 00:41:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.224 00:41:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.483 00:41:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.483 "name": "raid_bdev1", 00:22:15.483 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:15.483 "strip_size_kb": 0, 00:22:15.483 "state": "online", 00:22:15.483 "raid_level": "raid1", 00:22:15.483 "superblock": false, 00:22:15.483 "num_base_bdevs": 2, 00:22:15.483 "num_base_bdevs_discovered": 2, 00:22:15.483 "num_base_bdevs_operational": 2, 00:22:15.483 "base_bdevs_list": [ 00:22:15.483 { 00:22:15.483 "name": "spare", 00:22:15.483 "uuid": "65633c24-ec4f-5643-b1f9-b5adc9898512", 00:22:15.483 "is_configured": true, 00:22:15.483 "data_offset": 0, 00:22:15.483 "data_size": 65536 00:22:15.483 }, 00:22:15.483 { 00:22:15.483 "name": "BaseBdev2", 00:22:15.483 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:15.483 "is_configured": true, 00:22:15.483 "data_offset": 0, 00:22:15.483 "data_size": 65536 00:22:15.483 } 00:22:15.483 ] 00:22:15.483 }' 00:22:15.483 00:41:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.742 00:41:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.001 00:41:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:16.001 "name": "raid_bdev1", 00:22:16.001 "uuid": "b7939567-5a4b-4816-b474-cc17831a2f90", 00:22:16.001 "strip_size_kb": 0, 00:22:16.001 "state": "online", 00:22:16.001 "raid_level": "raid1", 00:22:16.001 "superblock": false, 00:22:16.001 "num_base_bdevs": 2, 00:22:16.001 "num_base_bdevs_discovered": 2, 00:22:16.001 "num_base_bdevs_operational": 2, 00:22:16.001 "base_bdevs_list": [ 00:22:16.001 { 00:22:16.001 "name": "spare", 00:22:16.001 "uuid": "65633c24-ec4f-5643-b1f9-b5adc9898512", 00:22:16.001 "is_configured": true, 00:22:16.001 "data_offset": 0, 00:22:16.001 "data_size": 65536 00:22:16.001 }, 00:22:16.001 { 00:22:16.001 "name": "BaseBdev2", 00:22:16.001 "uuid": "824bfc5d-d0fc-4f09-aeae-3acbe9f18abf", 00:22:16.001 "is_configured": 
true, 00:22:16.001 "data_offset": 0, 00:22:16.001 "data_size": 65536 00:22:16.001 } 00:22:16.001 ] 00:22:16.001 }' 00:22:16.001 00:41:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:16.001 00:41:49 -- common/autotest_common.sh@10 -- # set +x 00:22:16.617 00:41:50 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:16.617 [2024-04-27 00:41:50.203829] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.617 [2024-04-27 00:41:50.203887] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.876 00:22:16.876 Latency(us) 00:22:16.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.876 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:16.876 raid_bdev1 : 11.81 113.78 341.35 0.00 0.00 11984.02 269.96 109147.23 00:22:16.876 =================================================================================================================== 00:22:16.876 Total : 113.78 341.35 0.00 0.00 11984.02 269.96 109147.23 00:22:16.876 [2024-04-27 00:41:50.262898] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.876 [2024-04-27 00:41:50.262953] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.876 [2024-04-27 00:41:50.263034] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.876 [2024-04-27 00:41:50.263058] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:22:16.876 0 00:22:16.876 00:41:50 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.876 00:41:50 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:17.145 00:41:50 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:17.145 00:41:50 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:17.145 00:41:50 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@12 -- # local i 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.145 00:41:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:17.403 /dev/nbd0 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:17.403 00:41:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:17.403 00:41:50 -- common/autotest_common.sh@855 -- # local i 00:22:17.403 00:41:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:17.403 00:41:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:17.403 00:41:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:17.403 00:41:50 -- common/autotest_common.sh@859 -- # break 00:22:17.403 00:41:50 -- common/autotest_common.sh@870 -- # 
(( i = 1 )) 00:22:17.403 00:41:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:17.403 00:41:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.403 1+0 records in 00:22:17.403 1+0 records out 00:22:17.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306821 s, 13.3 MB/s 00:22:17.403 00:41:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.403 00:41:50 -- common/autotest_common.sh@872 -- # size=4096 00:22:17.403 00:41:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.403 00:41:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:17.403 00:41:50 -- common/autotest_common.sh@875 -- # return 0 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.403 00:41:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:17.403 00:41:50 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:22:17.403 00:41:50 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@12 -- # local i 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.403 00:41:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:17.661 /dev/nbd1 00:22:17.661 00:41:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:17.661 00:41:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:17.661 00:41:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:22:17.661 00:41:51 -- common/autotest_common.sh@855 -- # local i 00:22:17.661 00:41:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:17.661 00:41:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:17.661 00:41:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:22:17.661 00:41:51 -- common/autotest_common.sh@859 -- # break 00:22:17.661 00:41:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:17.661 00:41:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:17.661 00:41:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.661 1+0 records in 00:22:17.661 1+0 records out 00:22:17.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334568 s, 12.2 MB/s 00:22:17.661 00:41:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.661 00:41:51 -- common/autotest_common.sh@872 -- # size=4096 00:22:17.661 00:41:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.661 00:41:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:17.661 00:41:51 -- common/autotest_common.sh@875 -- # return 0 00:22:17.661 00:41:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.661 00:41:51 -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:22:17.661 00:41:51 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:17.919 00:41:51 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:17.919 00:41:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.919 00:41:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:17.919 00:41:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.919 00:41:51 -- bdev/nbd_common.sh@51 -- # local i 00:22:17.919 00:41:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.919 00:41:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@41 -- # break 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.178 00:41:51 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@51 -- # local i 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.178 00:41:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:18.437 00:41:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:18.437 00:41:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:18.437 00:41:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:18.437 00:41:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.437 00:41:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.437 00:41:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:18.437 00:41:51 -- bdev/nbd_common.sh@41 -- # break 00:22:18.437 00:41:51 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.437 00:41:51 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:18.437 00:41:51 -- bdev/bdev_raid.sh@709 -- # killprocess 131340 00:22:18.437 00:41:51 -- common/autotest_common.sh@936 -- # '[' -z 131340 ']' 00:22:18.437 00:41:51 -- common/autotest_common.sh@940 -- # kill -0 131340 00:22:18.437 00:41:51 -- common/autotest_common.sh@941 -- # uname 00:22:18.437 00:41:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:18.437 00:41:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131340 00:22:18.437 00:41:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:18.437 00:41:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:18.437 00:41:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131340' 00:22:18.437 killing process with pid 131340 00:22:18.437 00:41:51 -- common/autotest_common.sh@955 -- # kill 131340 00:22:18.437 Received shutdown signal, test time was about 13.416040 seconds 00:22:18.437 00:22:18.437 Latency(us) 00:22:18.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:18.437 =================================================================================================================== 00:22:18.437 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.437 [2024-04-27 00:41:51.851668] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:18.437 00:41:51 -- common/autotest_common.sh@960 -- # wait 131340 00:22:18.695 [2024-04-27 00:41:52.026689] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.629 ************************************ 00:22:19.629 END TEST raid_rebuild_test_io 00:22:19.629 ************************************ 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:19.629 00:22:19.629 real 0m18.519s 00:22:19.629 user 0m28.292s 00:22:19.629 sys 0m1.973s 00:22:19.629 00:41:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:19.629 00:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:22:19.629 00:41:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:19.629 00:41:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:19.629 00:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.629 ************************************ 00:22:19.629 START TEST raid_rebuild_test_sb_io 00:22:19.629 ************************************ 00:22:19.629 00:41:53 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 2 true true 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@544 -- # raid_pid=131840 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131840 /var/tmp/spdk-raid.sock 00:22:19.629 00:41:53 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.629 00:41:53 -- 
common/autotest_common.sh@817 -- # '[' -z 131840 ']' 00:22:19.629 00:41:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:19.629 00:41:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:19.629 00:41:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:19.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:19.629 00:41:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:19.629 00:41:53 -- common/autotest_common.sh@10 -- # set +x 00:22:19.888 [2024-04-27 00:41:53.225925] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:22:19.888 [2024-04-27 00:41:53.226275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131840 ] 00:22:19.888 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:19.888 Zero copy mechanism will not be used. 00:22:19.888 [2024-04-27 00:41:53.395306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.146 [2024-04-27 00:41:53.575710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.405 [2024-04-27 00:41:53.742705] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.665 00:41:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:20.665 00:41:54 -- common/autotest_common.sh@850 -- # return 0 00:22:20.665 00:41:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:20.665 00:41:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:20.665 00:41:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:20.924 BaseBdev1_malloc 00:22:20.924 00:41:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:21.183 [2024-04-27 00:41:54.705846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:21.183 [2024-04-27 00:41:54.705981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.183 [2024-04-27 00:41:54.706019] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:21.183 [2024-04-27 00:41:54.706065] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.183 [2024-04-27 00:41:54.708880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.183 [2024-04-27 00:41:54.708954] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:21.183 BaseBdev1 00:22:21.183 00:41:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:21.183 00:41:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:21.183 00:41:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:21.442 BaseBdev2_malloc 00:22:21.442 00:41:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:21.701 [2024-04-27 00:41:55.175321] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:21.701 [2024-04-27 00:41:55.175418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.701 [2024-04-27 00:41:55.175464] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:21.701 [2024-04-27 00:41:55.175516] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.701 [2024-04-27 00:41:55.177788] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.701 [2024-04-27 00:41:55.177835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:21.701 BaseBdev2 00:22:21.701 00:41:55 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:21.960 spare_malloc 00:22:21.960 00:41:55 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:22.219 spare_delay 00:22:22.219 00:41:55 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:22.479 [2024-04-27 00:41:55.836658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:22.479 [2024-04-27 00:41:55.836775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.479 [2024-04-27 00:41:55.836821] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:22.479 [2024-04-27 00:41:55.836896] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.479 [2024-04-27 00:41:55.839394] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.479 [2024-04-27 00:41:55.839464] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:22.479 spare 00:22:22.479 00:41:55 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:22.479 [2024-04-27 00:41:56.040762] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.479 [2024-04-27 00:41:56.043055] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.479 [2024-04-27 00:41:56.043349] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:22:22.479 [2024-04-27 00:41:56.043373] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:22.479 [2024-04-27 00:41:56.043521] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:22.479 [2024-04-27 00:41:56.043921] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:22:22.479 [2024-04-27 00:41:56.043974] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:22:22.479 [2024-04-27 00:41:56.044148] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.479 00:41:56 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:22.479 00:41:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:22.479 00:41:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:22.479 00:41:56 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:22.479 00:41:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:22.479 00:41:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:22.479 00:41:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.479 00:41:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.479 00:41:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.738 00:41:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:22.738 00:41:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.738 00:41:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.738 00:41:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.738 "name": "raid_bdev1", 00:22:22.738 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:22.738 "strip_size_kb": 0, 00:22:22.738 "state": "online", 00:22:22.738 "raid_level": "raid1", 00:22:22.738 "superblock": true, 00:22:22.738 "num_base_bdevs": 2, 00:22:22.738 "num_base_bdevs_discovered": 2, 00:22:22.738 "num_base_bdevs_operational": 2, 00:22:22.738 "base_bdevs_list": [ 00:22:22.738 { 00:22:22.738 "name": "BaseBdev1", 00:22:22.738 "uuid": "8edf3a92-41d9-51cf-ad1b-4d4581b787b1", 00:22:22.738 "is_configured": true, 00:22:22.738 "data_offset": 2048, 00:22:22.738 "data_size": 63488 00:22:22.738 }, 00:22:22.738 { 00:22:22.738 "name": "BaseBdev2", 00:22:22.738 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:22.738 "is_configured": true, 00:22:22.738 "data_offset": 2048, 00:22:22.738 "data_size": 63488 00:22:22.738 } 00:22:22.738 ] 00:22:22.738 }' 00:22:22.738 00:41:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.738 00:41:56 -- common/autotest_common.sh@10 -- # set +x 00:22:23.674 00:41:56 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:23.674 00:41:56 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:23.674 [2024-04-27 00:41:57.169218] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.674 00:41:57 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:23.674 00:41:57 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.674 00:41:57 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:23.933 00:41:57 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:23.933 00:41:57 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:23.933 00:41:57 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:23.933 00:41:57 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:23.933 [2024-04-27 00:41:57.512068] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:23.933 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:23.933 Zero copy mechanism will not be used. 00:22:23.933 Running I/O for 60 seconds... 
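An aside on the background-I/O pattern traced above; the flag readings here are inferred from the surrounding output rather than quoted from bdevperf's help text. The harness starts bdevperf with -z so the tool idles on the RPC socket, assembles the raid bdev over that same socket, and only then triggers the I/O window through the companion bdevperf.py script. A minimal sketch of that two-step pattern, reusing only paths and arguments already present in the trace (the -U flag from the original invocation is omitted here, since its meaning is not evident from the log):

    # -t 60: run for 60 s; -w randrw -M 50: 50/50 random read/write mix;
    # -o 3M: 3 MiB I/Os (hence the zero-copy notice above); -q 2: queue depth 2;
    # -z: idle until the perform_tests RPC arrives; -L bdev_raid: emit the *DEBUG* raid logs
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -z -L bdev_raid &
    # once the raid bdev exists, start the 60-second I/O window
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

This is why "Running I/O for 60 seconds..." only appears after the base bdevs and raid_bdev1 have been configured: the rebuild is exercised while that background workload is in flight.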
00:22:24.192 [2024-04-27 00:41:57.593923] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:24.192 [2024-04-27 00:41:57.600159] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.192 00:41:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.450 00:41:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.450 "name": "raid_bdev1", 00:22:24.450 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:24.450 "strip_size_kb": 0, 00:22:24.450 "state": "online", 00:22:24.450 "raid_level": "raid1", 00:22:24.450 "superblock": true, 00:22:24.450 "num_base_bdevs": 2, 00:22:24.450 "num_base_bdevs_discovered": 1, 00:22:24.450 "num_base_bdevs_operational": 1, 00:22:24.450 "base_bdevs_list": [ 00:22:24.450 { 00:22:24.450 "name": null, 00:22:24.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.450 "is_configured": false, 00:22:24.450 "data_offset": 2048, 00:22:24.450 "data_size": 63488 00:22:24.450 }, 00:22:24.450 { 00:22:24.450 "name": "BaseBdev2", 00:22:24.450 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:24.450 "is_configured": true, 00:22:24.450 "data_offset": 2048, 00:22:24.450 "data_size": 63488 00:22:24.450 } 00:22:24.450 ] 00:22:24.450 }' 00:22:24.450 00:41:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.450 00:41:57 -- common/autotest_common.sh@10 -- # set +x 00:22:25.025 00:41:58 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:25.283 [2024-04-27 00:41:58.745742] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:25.283 [2024-04-27 00:41:58.745819] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.283 00:41:58 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:25.283 [2024-04-27 00:41:58.787485] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:25.283 [2024-04-27 00:41:58.789375] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.540 [2024-04-27 00:41:58.904344] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:25.540 [2024-04-27 00:41:58.904724] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:25.541 [2024-04-27 00:41:59.112501] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:22:25.541 [2024-04-27 00:41:59.112740] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:26.107 [2024-04-27 00:41:59.432603] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:26.107 [2024-04-27 00:41:59.433058] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:26.107 [2024-04-27 00:41:59.546589] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:26.107 [2024-04-27 00:41:59.546840] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:26.365 00:41:59 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.365 00:41:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.365 00:41:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:26.365 00:41:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:26.365 00:41:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.366 00:41:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.366 00:41:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.366 [2024-04-27 00:41:59.891696] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:26.366 [2024-04-27 00:41:59.892271] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:26.624 00:42:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.624 "name": "raid_bdev1", 00:22:26.624 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:26.624 "strip_size_kb": 0, 00:22:26.624 "state": "online", 00:22:26.624 "raid_level": "raid1", 00:22:26.624 "superblock": true, 00:22:26.624 "num_base_bdevs": 2, 00:22:26.624 "num_base_bdevs_discovered": 2, 00:22:26.624 "num_base_bdevs_operational": 2, 00:22:26.624 "process": { 00:22:26.624 "type": "rebuild", 00:22:26.624 "target": "spare", 00:22:26.624 "progress": { 00:22:26.624 "blocks": 14336, 00:22:26.624 "percent": 22 00:22:26.624 } 00:22:26.624 }, 00:22:26.624 "base_bdevs_list": [ 00:22:26.624 { 00:22:26.624 "name": "spare", 00:22:26.624 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:26.624 "is_configured": true, 00:22:26.624 "data_offset": 2048, 00:22:26.624 "data_size": 63488 00:22:26.624 }, 00:22:26.624 { 00:22:26.624 "name": "BaseBdev2", 00:22:26.624 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:26.624 "is_configured": true, 00:22:26.624 "data_offset": 2048, 00:22:26.624 "data_size": 63488 00:22:26.624 } 00:22:26.624 ] 00:22:26.624 }' 00:22:26.624 00:42:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.624 00:42:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.624 00:42:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.624 [2024-04-27 00:42:00.120586] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:26.624 00:42:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.624 00:42:00 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:26.883 [2024-04-27 00:42:00.396626] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:27.141 [2024-04-27 00:42:00.569059] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:27.141 [2024-04-27 00:42:00.577569] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.141 [2024-04-27 00:42:00.619709] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.141 00:42:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.399 00:42:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:27.399 "name": "raid_bdev1", 00:22:27.399 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:27.399 "strip_size_kb": 0, 00:22:27.399 "state": "online", 00:22:27.399 "raid_level": "raid1", 00:22:27.399 "superblock": true, 00:22:27.399 "num_base_bdevs": 2, 00:22:27.399 "num_base_bdevs_discovered": 1, 00:22:27.399 "num_base_bdevs_operational": 1, 00:22:27.399 "base_bdevs_list": [ 00:22:27.399 { 00:22:27.399 "name": null, 00:22:27.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.399 "is_configured": false, 00:22:27.399 "data_offset": 2048, 00:22:27.399 "data_size": 63488 00:22:27.399 }, 00:22:27.399 { 00:22:27.399 "name": "BaseBdev2", 00:22:27.399 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:27.399 "is_configured": true, 00:22:27.399 "data_offset": 2048, 00:22:27.399 "data_size": 63488 00:22:27.399 } 00:22:27.399 ] 00:22:27.399 }' 00:22:27.399 00:42:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:27.399 00:42:00 -- common/autotest_common.sh@10 -- # set +x 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.333 "name": "raid_bdev1", 00:22:28.333 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:28.333 "strip_size_kb": 0, 00:22:28.333 "state": "online", 
00:22:28.333 "raid_level": "raid1", 00:22:28.333 "superblock": true, 00:22:28.333 "num_base_bdevs": 2, 00:22:28.333 "num_base_bdevs_discovered": 1, 00:22:28.333 "num_base_bdevs_operational": 1, 00:22:28.333 "base_bdevs_list": [ 00:22:28.333 { 00:22:28.333 "name": null, 00:22:28.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.333 "is_configured": false, 00:22:28.333 "data_offset": 2048, 00:22:28.333 "data_size": 63488 00:22:28.333 }, 00:22:28.333 { 00:22:28.333 "name": "BaseBdev2", 00:22:28.333 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:28.333 "is_configured": true, 00:22:28.333 "data_offset": 2048, 00:22:28.333 "data_size": 63488 00:22:28.333 } 00:22:28.333 ] 00:22:28.333 }' 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:28.333 00:42:01 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:28.592 [2024-04-27 00:42:02.092029] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:28.592 [2024-04-27 00:42:02.092076] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:28.592 [2024-04-27 00:42:02.142225] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:28.592 [2024-04-27 00:42:02.144511] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:28.592 00:42:02 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:28.850 [2024-04-27 00:42:02.430535] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:29.416 [2024-04-27 00:42:02.919364] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:29.416 [2024-04-27 00:42:02.919804] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:29.675 00:42:03 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.675 00:42:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.675 00:42:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.675 00:42:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.675 00:42:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.675 00:42:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.675 00:42:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.933 [2024-04-27 00:42:03.413056] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:29.933 00:42:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.933 "name": "raid_bdev1", 00:22:29.933 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:29.933 "strip_size_kb": 0, 00:22:29.933 "state": "online", 00:22:29.933 "raid_level": "raid1", 00:22:29.933 "superblock": true, 00:22:29.933 "num_base_bdevs": 2, 00:22:29.933 "num_base_bdevs_discovered": 2, 00:22:29.933 "num_base_bdevs_operational": 2, 00:22:29.933 "process": { 00:22:29.933 "type": "rebuild", 
00:22:29.933 "target": "spare", 00:22:29.933 "progress": { 00:22:29.933 "blocks": 14336, 00:22:29.933 "percent": 22 00:22:29.933 } 00:22:29.933 }, 00:22:29.933 "base_bdevs_list": [ 00:22:29.933 { 00:22:29.933 "name": "spare", 00:22:29.933 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:29.933 "is_configured": true, 00:22:29.933 "data_offset": 2048, 00:22:29.933 "data_size": 63488 00:22:29.933 }, 00:22:29.933 { 00:22:29.933 "name": "BaseBdev2", 00:22:29.933 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:29.933 "is_configured": true, 00:22:29.933 "data_offset": 2048, 00:22:29.933 "data_size": 63488 00:22:29.933 } 00:22:29.933 ] 00:22:29.933 }' 00:22:29.933 00:42:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.933 00:42:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.933 00:42:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:30.192 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@657 -- # local timeout=473 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.192 [2024-04-27 00:42:03.755209] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:30.192 00:42:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.192 "name": "raid_bdev1", 00:22:30.192 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:30.192 "strip_size_kb": 0, 00:22:30.192 "state": "online", 00:22:30.192 "raid_level": "raid1", 00:22:30.192 "superblock": true, 00:22:30.192 "num_base_bdevs": 2, 00:22:30.192 "num_base_bdevs_discovered": 2, 00:22:30.192 "num_base_bdevs_operational": 2, 00:22:30.192 "process": { 00:22:30.192 "type": "rebuild", 00:22:30.192 "target": "spare", 00:22:30.192 "progress": { 00:22:30.192 "blocks": 20480, 00:22:30.192 "percent": 32 00:22:30.192 } 00:22:30.192 }, 00:22:30.192 "base_bdevs_list": [ 00:22:30.192 { 00:22:30.192 "name": "spare", 00:22:30.192 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:30.192 "is_configured": true, 00:22:30.192 "data_offset": 2048, 00:22:30.192 "data_size": 63488 00:22:30.192 }, 00:22:30.192 { 00:22:30.192 "name": "BaseBdev2", 00:22:30.192 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:30.192 "is_configured": true, 00:22:30.192 "data_offset": 2048, 00:22:30.193 "data_size": 63488 00:22:30.193 } 
00:22:30.193 ] 00:22:30.193 }' 00:22:30.193 00:42:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.451 00:42:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.451 00:42:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.451 00:42:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.451 00:42:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:31.385 [2024-04-27 00:42:04.748121] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:31.385 [2024-04-27 00:42:04.855912] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:31.385 00:42:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:31.385 00:42:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:31.385 00:42:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:31.385 00:42:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:31.385 00:42:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:31.385 00:42:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:31.385 00:42:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.385 00:42:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.643 00:42:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.643 "name": "raid_bdev1", 00:22:31.643 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:31.643 "strip_size_kb": 0, 00:22:31.643 "state": "online", 00:22:31.643 "raid_level": "raid1", 00:22:31.643 "superblock": true, 00:22:31.643 "num_base_bdevs": 2, 00:22:31.643 "num_base_bdevs_discovered": 2, 00:22:31.643 "num_base_bdevs_operational": 2, 00:22:31.643 "process": { 00:22:31.643 "type": "rebuild", 00:22:31.643 "target": "spare", 00:22:31.643 "progress": { 00:22:31.643 "blocks": 43008, 00:22:31.643 "percent": 67 00:22:31.643 } 00:22:31.643 }, 00:22:31.643 "base_bdevs_list": [ 00:22:31.643 { 00:22:31.643 "name": "spare", 00:22:31.643 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:31.643 "is_configured": true, 00:22:31.643 "data_offset": 2048, 00:22:31.643 "data_size": 63488 00:22:31.643 }, 00:22:31.643 { 00:22:31.643 "name": "BaseBdev2", 00:22:31.643 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:31.643 "is_configured": true, 00:22:31.643 "data_offset": 2048, 00:22:31.643 "data_size": 63488 00:22:31.643 } 00:22:31.643 ] 00:22:31.643 }' 00:22:31.643 00:42:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.643 00:42:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.644 00:42:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.644 [2024-04-27 00:42:05.173045] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:31.644 00:42:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.644 00:42:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:31.901 [2024-04-27 00:42:05.280373] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:31.901 [2024-04-27 00:42:05.280656] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:32.159 [2024-04-27 00:42:05.605742] 
bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:32.418 [2024-04-27 00:42:05.819150] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:32.418 [2024-04-27 00:42:05.819284] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:32.676 00:42:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:32.676 00:42:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.676 00:42:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.676 00:42:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:32.676 00:42:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:32.676 00:42:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.676 00:42:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.676 00:42:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.935 00:42:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.935 "name": "raid_bdev1", 00:22:32.935 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:32.935 "strip_size_kb": 0, 00:22:32.935 "state": "online", 00:22:32.935 "raid_level": "raid1", 00:22:32.935 "superblock": true, 00:22:32.935 "num_base_bdevs": 2, 00:22:32.935 "num_base_bdevs_discovered": 2, 00:22:32.935 "num_base_bdevs_operational": 2, 00:22:32.935 "process": { 00:22:32.935 "type": "rebuild", 00:22:32.935 "target": "spare", 00:22:32.935 "progress": { 00:22:32.935 "blocks": 61440, 00:22:32.935 "percent": 96 00:22:32.935 } 00:22:32.935 }, 00:22:32.935 "base_bdevs_list": [ 00:22:32.935 { 00:22:32.935 "name": "spare", 00:22:32.935 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:32.935 "is_configured": true, 00:22:32.935 "data_offset": 2048, 00:22:32.935 "data_size": 63488 00:22:32.935 }, 00:22:32.935 { 00:22:32.935 "name": "BaseBdev2", 00:22:32.935 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:32.935 "is_configured": true, 00:22:32.935 "data_offset": 2048, 00:22:32.935 "data_size": 63488 00:22:32.935 } 00:22:32.935 ] 00:22:32.935 }' 00:22:32.935 00:42:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.935 [2024-04-27 00:42:06.475915] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:32.935 00:42:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.935 00:42:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.194 00:42:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.194 00:42:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:33.194 [2024-04-27 00:42:06.581897] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:33.194 [2024-04-27 00:42:06.583908] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.133 00:42:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:34.133 00:42:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.133 00:42:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.133 00:42:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:34.133 00:42:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:34.133 00:42:07 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.133 00:42:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.133 00:42:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.392 00:42:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.392 "name": "raid_bdev1", 00:22:34.392 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:34.392 "strip_size_kb": 0, 00:22:34.392 "state": "online", 00:22:34.392 "raid_level": "raid1", 00:22:34.392 "superblock": true, 00:22:34.392 "num_base_bdevs": 2, 00:22:34.392 "num_base_bdevs_discovered": 2, 00:22:34.392 "num_base_bdevs_operational": 2, 00:22:34.392 "base_bdevs_list": [ 00:22:34.392 { 00:22:34.392 "name": "spare", 00:22:34.392 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:34.392 "is_configured": true, 00:22:34.392 "data_offset": 2048, 00:22:34.392 "data_size": 63488 00:22:34.392 }, 00:22:34.392 { 00:22:34.392 "name": "BaseBdev2", 00:22:34.392 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:34.392 "is_configured": true, 00:22:34.392 "data_offset": 2048, 00:22:34.392 "data_size": 63488 00:22:34.392 } 00:22:34.392 ] 00:22:34.392 }' 00:22:34.392 00:42:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.392 00:42:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:34.392 00:42:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@660 -- # break 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.393 00:42:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.651 00:42:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.651 "name": "raid_bdev1", 00:22:34.651 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:34.651 "strip_size_kb": 0, 00:22:34.651 "state": "online", 00:22:34.651 "raid_level": "raid1", 00:22:34.651 "superblock": true, 00:22:34.651 "num_base_bdevs": 2, 00:22:34.651 "num_base_bdevs_discovered": 2, 00:22:34.651 "num_base_bdevs_operational": 2, 00:22:34.651 "base_bdevs_list": [ 00:22:34.651 { 00:22:34.651 "name": "spare", 00:22:34.651 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:34.651 "is_configured": true, 00:22:34.651 "data_offset": 2048, 00:22:34.651 "data_size": 63488 00:22:34.651 }, 00:22:34.651 { 00:22:34.651 "name": "BaseBdev2", 00:22:34.651 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:34.651 "is_configured": true, 00:22:34.651 "data_offset": 2048, 00:22:34.651 "data_size": 63488 00:22:34.651 } 00:22:34.651 ] 00:22:34.651 }' 00:22:34.651 00:42:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.651 00:42:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:34.651 00:42:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:34.910 00:42:08 -- 
bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.910 "name": "raid_bdev1", 00:22:34.910 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:34.910 "strip_size_kb": 0, 00:22:34.910 "state": "online", 00:22:34.910 "raid_level": "raid1", 00:22:34.910 "superblock": true, 00:22:34.910 "num_base_bdevs": 2, 00:22:34.910 "num_base_bdevs_discovered": 2, 00:22:34.910 "num_base_bdevs_operational": 2, 00:22:34.910 "base_bdevs_list": [ 00:22:34.910 { 00:22:34.910 "name": "spare", 00:22:34.910 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:34.910 "is_configured": true, 00:22:34.910 "data_offset": 2048, 00:22:34.910 "data_size": 63488 00:22:34.910 }, 00:22:34.910 { 00:22:34.910 "name": "BaseBdev2", 00:22:34.910 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:34.910 "is_configured": true, 00:22:34.910 "data_offset": 2048, 00:22:34.910 "data_size": 63488 00:22:34.910 } 00:22:34.910 ] 00:22:34.910 }' 00:22:34.910 00:42:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.910 00:42:08 -- common/autotest_common.sh@10 -- # set +x 00:22:35.860 00:42:09 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:35.861 [2024-04-27 00:42:09.291360] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:35.861 [2024-04-27 00:42:09.291393] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.861 00:22:35.861 Latency(us) 00:22:35.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.861 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:35.861 raid_bdev1 : 11.87 108.09 324.28 0.00 0.00 12866.22 275.55 116296.61 00:22:35.861 =================================================================================================================== 00:22:35.861 Total : 108.09 324.28 0.00 0.00 12866.22 275.55 116296.61 00:22:35.861 [2024-04-27 00:42:09.398315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.861 [2024-04-27 00:42:09.398379] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.861 0 00:22:35.861 [2024-04-27 00:42:09.398468] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.861 [2024-04-27 00:42:09.398480] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 
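A quick consistency check on the latency summary above, using only numbers already printed there: throughput should equal IOPS times I/O size, and with the 3 MiB request size (-o 3M, i.e. 3145728 bytes) it does,

    108.09 IO/s x 3 MiB/IO = 324.27 MiB/s    (reported: 324.28 MiB/s)

The 11.87 s runtime, well short of the requested 60 s window, appears to reflect how long I/O actually ran before bdev_raid_delete tore the volume down mid-run, which is the sequence the trace shows immediately before the table.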
00:22:35.861 00:42:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.861 00:42:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:36.122 00:42:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:36.122 00:42:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:36.122 00:42:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@12 -- # local i 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.122 00:42:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:36.385 /dev/nbd0 00:22:36.385 00:42:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:36.385 00:42:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:36.385 00:42:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:36.385 00:42:09 -- common/autotest_common.sh@855 -- # local i 00:22:36.385 00:42:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:36.385 00:42:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:36.385 00:42:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:36.385 00:42:09 -- common/autotest_common.sh@859 -- # break 00:22:36.385 00:42:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:36.385 00:42:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:36.385 00:42:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.385 1+0 records in 00:22:36.385 1+0 records out 00:22:36.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546132 s, 7.5 MB/s 00:22:36.385 00:42:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.385 00:42:09 -- common/autotest_common.sh@872 -- # size=4096 00:22:36.385 00:42:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.385 00:42:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:36.386 00:42:09 -- common/autotest_common.sh@875 -- # return 0 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.386 00:42:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:36.386 00:42:09 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:22:36.386 00:42:09 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@12 -- # local i 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.386 00:42:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:22:36.644 /dev/nbd1 00:22:36.645 00:42:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.645 00:42:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:36.645 00:42:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:22:36.645 00:42:10 -- common/autotest_common.sh@855 -- # local i 00:22:36.645 00:42:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:36.645 00:42:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:36.645 00:42:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:22:36.645 00:42:10 -- common/autotest_common.sh@859 -- # break 00:22:36.645 00:42:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:36.645 00:42:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:36.645 00:42:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.645 1+0 records in 00:22:36.645 1+0 records out 00:22:36.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583822 s, 7.0 MB/s 00:22:36.645 00:42:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.645 00:42:10 -- common/autotest_common.sh@872 -- # size=4096 00:22:36.645 00:42:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.645 00:42:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:36.645 00:42:10 -- common/autotest_common.sh@875 -- # return 0 00:22:36.645 00:42:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.645 00:42:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.645 00:42:10 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:36.903 00:42:10 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:36.903 00:42:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.903 00:42:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:36.903 00:42:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.903 00:42:10 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.903 00:42:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.903 00:42:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@41 -- # break 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.162 00:42:10 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@51 -- # local i 00:22:37.162 00:42:10 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:37.162 00:42:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:37.421 00:42:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:37.421 00:42:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:37.421 00:42:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:37.421 00:42:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.421 00:42:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.421 00:42:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:37.421 00:42:10 -- bdev/nbd_common.sh@41 -- # break 00:22:37.421 00:42:10 -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.421 00:42:10 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:37.421 00:42:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.421 00:42:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:37.421 00:42:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:37.680 00:42:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:37.939 [2024-04-27 00:42:11.313403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:37.939 [2024-04-27 00:42:11.313517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.939 [2024-04-27 00:42:11.313559] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:22:37.939 [2024-04-27 00:42:11.313587] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.939 [2024-04-27 00:42:11.316108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.939 [2024-04-27 00:42:11.316192] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:37.939 [2024-04-27 00:42:11.316329] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:37.939 [2024-04-27 00:42:11.316425] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.939 BaseBdev1 00:22:37.939 00:42:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.939 00:42:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:22:37.939 00:42:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:22:38.199 00:42:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:38.199 [2024-04-27 00:42:11.734170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:38.199 [2024-04-27 00:42:11.734288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.199 [2024-04-27 00:42:11.734329] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:38.199 [2024-04-27 00:42:11.734377] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.199 [2024-04-27 00:42:11.734970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.199 [2024-04-27 00:42:11.735047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev2 00:22:38.199 [2024-04-27 00:42:11.735189] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:22:38.199 [2024-04-27 00:42:11.735204] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:22:38.199 [2024-04-27 00:42:11.735211] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.199 [2024-04-27 00:42:11.735231] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:22:38.199 [2024-04-27 00:42:11.735294] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:38.199 BaseBdev2 00:22:38.199 00:42:11 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:38.458 00:42:11 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:38.717 [2024-04-27 00:42:12.138297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:38.717 [2024-04-27 00:42:12.138438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.717 [2024-04-27 00:42:12.138484] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:22:38.717 [2024-04-27 00:42:12.138511] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.717 [2024-04-27 00:42:12.139173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.717 [2024-04-27 00:42:12.139274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:38.717 [2024-04-27 00:42:12.139421] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:38.717 [2024-04-27 00:42:12.139447] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:38.717 spare 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.717 00:42:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.717 [2024-04-27 00:42:12.239598] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:22:38.717 [2024-04-27 00:42:12.239624] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:38.717 [2024-04-27 00:42:12.239774] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:22:38.717 [2024-04-27 00:42:12.240232] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:22:38.717 [2024-04-27 00:42:12.240255] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:22:38.717 [2024-04-27 00:42:12.240460] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.976 00:42:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:38.976 "name": "raid_bdev1", 00:22:38.976 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:38.976 "strip_size_kb": 0, 00:22:38.976 "state": "online", 00:22:38.976 "raid_level": "raid1", 00:22:38.976 "superblock": true, 00:22:38.976 "num_base_bdevs": 2, 00:22:38.976 "num_base_bdevs_discovered": 2, 00:22:38.976 "num_base_bdevs_operational": 2, 00:22:38.976 "base_bdevs_list": [ 00:22:38.976 { 00:22:38.976 "name": "spare", 00:22:38.976 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:38.976 "is_configured": true, 00:22:38.976 "data_offset": 2048, 00:22:38.976 "data_size": 63488 00:22:38.976 }, 00:22:38.976 { 00:22:38.976 "name": "BaseBdev2", 00:22:38.976 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:38.976 "is_configured": true, 00:22:38.976 "data_offset": 2048, 00:22:38.976 "data_size": 63488 00:22:38.976 } 00:22:38.976 ] 00:22:38.976 }' 00:22:38.976 00:42:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:38.976 00:42:12 -- common/autotest_common.sh@10 -- # set +x 00:22:39.543 00:42:13 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:39.543 00:42:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.543 00:42:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:39.543 00:42:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:39.543 00:42:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.543 00:42:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.543 00:42:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.801 00:42:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:39.801 "name": "raid_bdev1", 00:22:39.801 "uuid": "3e861614-47f4-459c-98e6-a60ec1976cfc", 00:22:39.801 "strip_size_kb": 0, 00:22:39.801 "state": "online", 00:22:39.801 "raid_level": "raid1", 00:22:39.801 "superblock": true, 00:22:39.801 "num_base_bdevs": 2, 00:22:39.801 "num_base_bdevs_discovered": 2, 00:22:39.801 "num_base_bdevs_operational": 2, 00:22:39.801 "base_bdevs_list": [ 00:22:39.801 { 00:22:39.801 "name": "spare", 00:22:39.801 "uuid": "15a99de8-e9d5-5997-b326-0709c8ddfb9a", 00:22:39.801 "is_configured": true, 00:22:39.801 "data_offset": 2048, 00:22:39.801 "data_size": 63488 00:22:39.801 }, 00:22:39.801 { 00:22:39.801 "name": "BaseBdev2", 00:22:39.801 "uuid": "f4532995-757e-54b1-b9b1-cb3be21a6d65", 00:22:39.801 "is_configured": true, 00:22:39.801 "data_offset": 2048, 00:22:39.801 "data_size": 63488 00:22:39.801 } 00:22:39.801 ] 00:22:39.801 }' 00:22:39.801 00:42:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:39.801 00:42:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:39.801 00:42:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.060 00:42:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:40.060 00:42:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.060 00:42:13 -- bdev/bdev_raid.sh@706 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:22:40.319 00:42:13 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.319 00:42:13 -- bdev/bdev_raid.sh@709 -- # killprocess 131840 00:22:40.319 00:42:13 -- common/autotest_common.sh@936 -- # '[' -z 131840 ']' 00:22:40.319 00:42:13 -- common/autotest_common.sh@940 -- # kill -0 131840 00:22:40.319 00:42:13 -- common/autotest_common.sh@941 -- # uname 00:22:40.319 00:42:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:40.319 00:42:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 131840 00:22:40.319 00:42:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:40.319 00:42:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:40.319 00:42:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 131840' 00:22:40.319 killing process with pid 131840 00:22:40.319 00:42:13 -- common/autotest_common.sh@955 -- # kill 131840 00:22:40.319 Received shutdown signal, test time was about 16.195681 seconds 00:22:40.319 00:22:40.319 Latency(us) 00:22:40.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:40.319 =================================================================================================================== 00:22:40.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.319 [2024-04-27 00:42:13.709752] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.319 [2024-04-27 00:42:13.709849] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.319 [2024-04-27 00:42:13.709959] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.319 [2024-04-27 00:42:13.709973] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:22:40.319 00:42:13 -- common/autotest_common.sh@960 -- # wait 131840 00:22:40.319 [2024-04-27 00:42:13.872817] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:41.696 ************************************ 00:22:41.696 END TEST raid_rebuild_test_sb_io 00:22:41.696 ************************************ 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:41.696 00:22:41.696 real 0m21.736s 00:22:41.696 user 0m34.439s 00:22:41.696 sys 0m2.334s 00:22:41.696 00:42:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:41.696 00:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:22:41.696 00:42:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:41.696 00:42:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:41.696 00:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:41.696 ************************************ 00:22:41.696 START TEST raid_rebuild_test 00:22:41.696 ************************************ 00:22:41.696 00:42:14 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 false false 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i <= 
num_base_bdevs )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@544 -- # raid_pid=132406 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132406 /var/tmp/spdk-raid.sock 00:22:41.696 00:42:14 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:41.696 00:42:14 -- common/autotest_common.sh@817 -- # '[' -z 132406 ']' 00:22:41.696 00:42:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:41.696 00:42:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:41.696 00:42:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:41.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:41.696 00:42:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:41.696 00:42:14 -- common/autotest_common.sh@10 -- # set +x 00:22:41.696 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:41.696 Zero copy mechanism will not be used. 00:22:41.696 [2024-04-27 00:42:15.063694] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
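The harness pattern traced above is: launch bdevperf against a private RPC socket, then block until that socket answers RPCs before configuring any bdevs. A minimal standalone sketch of the same pattern, with the socket path and flags copied verbatim from the command line in the trace (the polling loop is only an approximation of the autotest waitforlisten helper, and rpc_get_methods is simply a cheap RPC used here to probe readiness):

    # Run from the SPDK repo root. -z keeps bdevperf idle until it is driven
    # over RPC; -t/-w/-M/-o/-q describe the 60 s, 50/50 randrw, 3M, qd=2 workload.
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!

    # Poll the UNIX-domain socket until the app accepts RPCs.
    until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
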
00:22:41.696 [2024-04-27 00:42:15.063867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132406 ] 00:22:41.696 [2024-04-27 00:42:15.229638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.956 [2024-04-27 00:42:15.416473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.287 [2024-04-27 00:42:15.590761] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.546 00:42:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:42.546 00:42:16 -- common/autotest_common.sh@850 -- # return 0 00:22:42.546 00:42:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:42.546 00:42:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:42.546 00:42:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:42.805 BaseBdev1 00:22:42.805 00:42:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:42.805 00:42:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:42.805 00:42:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:43.064 BaseBdev2 00:22:43.064 00:42:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:43.064 00:42:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:43.064 00:42:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:43.325 BaseBdev3 00:22:43.325 00:42:16 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:43.325 00:42:16 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:43.325 00:42:16 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:43.584 BaseBdev4 00:22:43.584 00:42:17 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:43.844 spare_malloc 00:22:43.844 00:42:17 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:44.103 spare_delay 00:22:44.103 00:42:17 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:44.103 [2024-04-27 00:42:17.681148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:44.103 [2024-04-27 00:42:17.681252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.103 [2024-04-27 00:42:17.681288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:44.103 [2024-04-27 00:42:17.681334] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.103 [2024-04-27 00:42:17.683732] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.103 [2024-04-27 00:42:17.683801] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:44.103 spare 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:44.365 [2024-04-27 00:42:17.877202] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.365 [2024-04-27 00:42:17.879129] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:44.365 [2024-04-27 00:42:17.879192] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:44.365 [2024-04-27 00:42:17.879231] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:44.365 [2024-04-27 00:42:17.879301] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:22:44.365 [2024-04-27 00:42:17.879312] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:44.365 [2024-04-27 00:42:17.879485] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:44.365 [2024-04-27 00:42:17.879816] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:22:44.365 [2024-04-27 00:42:17.879840] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:22:44.365 [2024-04-27 00:42:17.880021] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.365 00:42:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.624 00:42:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.624 "name": "raid_bdev1", 00:22:44.624 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:44.624 "strip_size_kb": 0, 00:22:44.624 "state": "online", 00:22:44.624 "raid_level": "raid1", 00:22:44.624 "superblock": false, 00:22:44.624 "num_base_bdevs": 4, 00:22:44.624 "num_base_bdevs_discovered": 4, 00:22:44.624 "num_base_bdevs_operational": 4, 00:22:44.624 "base_bdevs_list": [ 00:22:44.624 { 00:22:44.624 "name": "BaseBdev1", 00:22:44.624 "uuid": "c47c56f4-2626-4408-b9a2-5ad31c0fdb9a", 00:22:44.624 "is_configured": true, 00:22:44.624 "data_offset": 0, 00:22:44.624 "data_size": 65536 00:22:44.624 }, 00:22:44.624 { 00:22:44.624 "name": "BaseBdev2", 00:22:44.624 "uuid": "9d186963-aac7-4840-b2a3-6cc9f68834ab", 00:22:44.624 "is_configured": true, 00:22:44.624 "data_offset": 0, 00:22:44.624 "data_size": 65536 00:22:44.624 }, 00:22:44.624 { 00:22:44.624 "name": "BaseBdev3", 00:22:44.624 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:44.624 "is_configured": true, 00:22:44.624 "data_offset": 0, 00:22:44.624 "data_size": 65536 00:22:44.624 }, 
00:22:44.624 { 00:22:44.624 "name": "BaseBdev4", 00:22:44.624 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:44.624 "is_configured": true, 00:22:44.624 "data_offset": 0, 00:22:44.624 "data_size": 65536 00:22:44.624 } 00:22:44.624 ] 00:22:44.624 }' 00:22:44.624 00:42:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.624 00:42:18 -- common/autotest_common.sh@10 -- # set +x 00:22:45.192 00:42:18 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:45.192 00:42:18 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:45.451 [2024-04-27 00:42:18.925718] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.451 00:42:18 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:45.451 00:42:18 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.451 00:42:18 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:45.710 00:42:19 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:45.710 00:42:19 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:45.710 00:42:19 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:45.710 00:42:19 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@12 -- # local i 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:45.710 00:42:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:45.969 [2024-04-27 00:42:19.373564] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:45.969 /dev/nbd0 00:22:45.969 00:42:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:45.969 00:42:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:45.969 00:42:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:45.969 00:42:19 -- common/autotest_common.sh@855 -- # local i 00:22:45.969 00:42:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:45.969 00:42:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:45.969 00:42:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:45.969 00:42:19 -- common/autotest_common.sh@859 -- # break 00:22:45.969 00:42:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:45.969 00:42:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:45.969 00:42:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:45.969 1+0 records in 00:22:45.969 1+0 records out 00:22:45.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252525 s, 16.2 MB/s 00:22:45.969 00:42:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.969 00:42:19 -- common/autotest_common.sh@872 -- # size=4096 00:22:45.969 00:42:19 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:45.969 00:42:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:45.969 00:42:19 -- common/autotest_common.sh@875 -- # return 0 00:22:45.969 00:42:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:45.969 00:42:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:45.969 00:42:19 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:45.969 00:42:19 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:45.969 00:42:19 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:52.563 65536+0 records in 00:22:52.563 65536+0 records out 00:22:52.563 33554432 bytes (34 MB, 32 MiB) copied, 5.69676 s, 5.9 MB/s 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@51 -- # local i 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:52.563 [2024-04-27 00:42:25.426451] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@41 -- # break 00:22:52.563 00:42:25 -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:52.563 [2024-04-27 00:42:25.662200] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:52.563 "name": "raid_bdev1", 00:22:52.563 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:52.563 "strip_size_kb": 0, 00:22:52.563 "state": "online", 00:22:52.563 
"raid_level": "raid1", 00:22:52.563 "superblock": false, 00:22:52.563 "num_base_bdevs": 4, 00:22:52.563 "num_base_bdevs_discovered": 3, 00:22:52.563 "num_base_bdevs_operational": 3, 00:22:52.563 "base_bdevs_list": [ 00:22:52.563 { 00:22:52.563 "name": null, 00:22:52.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.563 "is_configured": false, 00:22:52.563 "data_offset": 0, 00:22:52.563 "data_size": 65536 00:22:52.563 }, 00:22:52.563 { 00:22:52.563 "name": "BaseBdev2", 00:22:52.563 "uuid": "9d186963-aac7-4840-b2a3-6cc9f68834ab", 00:22:52.563 "is_configured": true, 00:22:52.563 "data_offset": 0, 00:22:52.563 "data_size": 65536 00:22:52.563 }, 00:22:52.563 { 00:22:52.563 "name": "BaseBdev3", 00:22:52.563 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:52.563 "is_configured": true, 00:22:52.563 "data_offset": 0, 00:22:52.563 "data_size": 65536 00:22:52.563 }, 00:22:52.563 { 00:22:52.563 "name": "BaseBdev4", 00:22:52.563 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:52.563 "is_configured": true, 00:22:52.563 "data_offset": 0, 00:22:52.563 "data_size": 65536 00:22:52.563 } 00:22:52.563 ] 00:22:52.563 }' 00:22:52.563 00:42:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:52.563 00:42:25 -- common/autotest_common.sh@10 -- # set +x 00:22:53.130 00:42:26 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.388 [2024-04-27 00:42:26.798497] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:53.388 [2024-04-27 00:42:26.798787] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.388 [2024-04-27 00:42:26.809846] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:22:53.388 [2024-04-27 00:42:26.812038] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.388 00:42:26 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:54.325 00:42:27 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.325 00:42:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.325 00:42:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:54.325 00:42:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:54.325 00:42:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.325 00:42:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.325 00:42:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.583 00:42:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.583 "name": "raid_bdev1", 00:22:54.583 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:54.583 "strip_size_kb": 0, 00:22:54.583 "state": "online", 00:22:54.583 "raid_level": "raid1", 00:22:54.583 "superblock": false, 00:22:54.583 "num_base_bdevs": 4, 00:22:54.583 "num_base_bdevs_discovered": 4, 00:22:54.583 "num_base_bdevs_operational": 4, 00:22:54.583 "process": { 00:22:54.583 "type": "rebuild", 00:22:54.583 "target": "spare", 00:22:54.583 "progress": { 00:22:54.583 "blocks": 24576, 00:22:54.583 "percent": 37 00:22:54.583 } 00:22:54.583 }, 00:22:54.583 "base_bdevs_list": [ 00:22:54.583 { 00:22:54.583 "name": "spare", 00:22:54.583 "uuid": "28fec710-1515-54e3-b69d-90b706d2019c", 00:22:54.583 "is_configured": true, 00:22:54.583 "data_offset": 0, 00:22:54.583 "data_size": 65536 00:22:54.583 }, 
00:22:54.583 { 00:22:54.583 "name": "BaseBdev2", 00:22:54.583 "uuid": "9d186963-aac7-4840-b2a3-6cc9f68834ab", 00:22:54.583 "is_configured": true, 00:22:54.583 "data_offset": 0, 00:22:54.583 "data_size": 65536 00:22:54.583 }, 00:22:54.583 { 00:22:54.583 "name": "BaseBdev3", 00:22:54.583 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:54.583 "is_configured": true, 00:22:54.583 "data_offset": 0, 00:22:54.583 "data_size": 65536 00:22:54.583 }, 00:22:54.583 { 00:22:54.583 "name": "BaseBdev4", 00:22:54.583 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:54.583 "is_configured": true, 00:22:54.583 "data_offset": 0, 00:22:54.583 "data_size": 65536 00:22:54.583 } 00:22:54.583 ] 00:22:54.583 }' 00:22:54.583 00:42:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.583 00:42:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.583 00:42:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.583 00:42:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.583 00:42:28 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:54.841 [2024-04-27 00:42:28.346359] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.841 [2024-04-27 00:42:28.420835] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:54.841 [2024-04-27 00:42:28.421112] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.100 00:42:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.358 00:42:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:55.358 "name": "raid_bdev1", 00:22:55.358 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:55.358 "strip_size_kb": 0, 00:22:55.358 "state": "online", 00:22:55.358 "raid_level": "raid1", 00:22:55.358 "superblock": false, 00:22:55.358 "num_base_bdevs": 4, 00:22:55.358 "num_base_bdevs_discovered": 3, 00:22:55.358 "num_base_bdevs_operational": 3, 00:22:55.358 "base_bdevs_list": [ 00:22:55.358 { 00:22:55.358 "name": null, 00:22:55.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.358 "is_configured": false, 00:22:55.358 "data_offset": 0, 00:22:55.358 "data_size": 65536 00:22:55.358 }, 00:22:55.358 { 00:22:55.358 "name": "BaseBdev2", 00:22:55.358 "uuid": "9d186963-aac7-4840-b2a3-6cc9f68834ab", 00:22:55.358 "is_configured": true, 00:22:55.358 "data_offset": 0, 00:22:55.358 "data_size": 65536 00:22:55.358 }, 00:22:55.358 { 00:22:55.358 "name": "BaseBdev3", 
00:22:55.358 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:55.358 "is_configured": true, 00:22:55.358 "data_offset": 0, 00:22:55.358 "data_size": 65536 00:22:55.358 }, 00:22:55.358 { 00:22:55.358 "name": "BaseBdev4", 00:22:55.358 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:55.358 "is_configured": true, 00:22:55.359 "data_offset": 0, 00:22:55.359 "data_size": 65536 00:22:55.359 } 00:22:55.359 ] 00:22:55.359 }' 00:22:55.359 00:42:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:55.359 00:42:28 -- common/autotest_common.sh@10 -- # set +x 00:22:55.926 00:42:29 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.926 00:42:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:55.926 00:42:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:55.926 00:42:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:55.926 00:42:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:55.926 00:42:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.926 00:42:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.185 00:42:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:56.185 "name": "raid_bdev1", 00:22:56.185 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:56.185 "strip_size_kb": 0, 00:22:56.185 "state": "online", 00:22:56.185 "raid_level": "raid1", 00:22:56.185 "superblock": false, 00:22:56.185 "num_base_bdevs": 4, 00:22:56.185 "num_base_bdevs_discovered": 3, 00:22:56.185 "num_base_bdevs_operational": 3, 00:22:56.185 "base_bdevs_list": [ 00:22:56.185 { 00:22:56.185 "name": null, 00:22:56.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.185 "is_configured": false, 00:22:56.185 "data_offset": 0, 00:22:56.185 "data_size": 65536 00:22:56.185 }, 00:22:56.185 { 00:22:56.185 "name": "BaseBdev2", 00:22:56.185 "uuid": "9d186963-aac7-4840-b2a3-6cc9f68834ab", 00:22:56.185 "is_configured": true, 00:22:56.185 "data_offset": 0, 00:22:56.185 "data_size": 65536 00:22:56.185 }, 00:22:56.185 { 00:22:56.185 "name": "BaseBdev3", 00:22:56.185 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:56.185 "is_configured": true, 00:22:56.185 "data_offset": 0, 00:22:56.185 "data_size": 65536 00:22:56.185 }, 00:22:56.185 { 00:22:56.185 "name": "BaseBdev4", 00:22:56.185 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:56.185 "is_configured": true, 00:22:56.185 "data_offset": 0, 00:22:56.185 "data_size": 65536 00:22:56.185 } 00:22:56.185 ] 00:22:56.185 }' 00:22:56.185 00:42:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:56.185 00:42:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:56.185 00:42:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:56.185 00:42:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:56.185 00:42:29 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:56.443 [2024-04-27 00:42:29.894838] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:56.443 [2024-04-27 00:42:29.895084] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.443 [2024-04-27 00:42:29.906281] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:22:56.443 [2024-04-27 00:42:29.908536] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:22:56.443 00:42:29 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:57.379 00:42:30 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.379 00:42:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.379 00:42:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:57.379 00:42:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:57.379 00:42:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.379 00:42:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.379 00:42:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.637 00:42:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.637 "name": "raid_bdev1", 00:22:57.637 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:57.637 "strip_size_kb": 0, 00:22:57.637 "state": "online", 00:22:57.637 "raid_level": "raid1", 00:22:57.637 "superblock": false, 00:22:57.637 "num_base_bdevs": 4, 00:22:57.638 "num_base_bdevs_discovered": 4, 00:22:57.638 "num_base_bdevs_operational": 4, 00:22:57.638 "process": { 00:22:57.638 "type": "rebuild", 00:22:57.638 "target": "spare", 00:22:57.638 "progress": { 00:22:57.638 "blocks": 24576, 00:22:57.638 "percent": 37 00:22:57.638 } 00:22:57.638 }, 00:22:57.638 "base_bdevs_list": [ 00:22:57.638 { 00:22:57.638 "name": "spare", 00:22:57.638 "uuid": "28fec710-1515-54e3-b69d-90b706d2019c", 00:22:57.638 "is_configured": true, 00:22:57.638 "data_offset": 0, 00:22:57.638 "data_size": 65536 00:22:57.638 }, 00:22:57.638 { 00:22:57.638 "name": "BaseBdev2", 00:22:57.638 "uuid": "9d186963-aac7-4840-b2a3-6cc9f68834ab", 00:22:57.638 "is_configured": true, 00:22:57.638 "data_offset": 0, 00:22:57.638 "data_size": 65536 00:22:57.638 }, 00:22:57.638 { 00:22:57.638 "name": "BaseBdev3", 00:22:57.638 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:57.638 "is_configured": true, 00:22:57.638 "data_offset": 0, 00:22:57.638 "data_size": 65536 00:22:57.638 }, 00:22:57.638 { 00:22:57.638 "name": "BaseBdev4", 00:22:57.638 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:57.638 "is_configured": true, 00:22:57.638 "data_offset": 0, 00:22:57.638 "data_size": 65536 00:22:57.638 } 00:22:57.638 ] 00:22:57.638 }' 00:22:57.638 00:42:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.638 00:42:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.638 00:42:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.896 00:42:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.896 00:42:31 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:57.896 00:42:31 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:57.896 00:42:31 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:57.896 00:42:31 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:57.896 00:42:31 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:57.896 [2024-04-27 00:42:31.450379] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:58.155 [2024-04-27 00:42:31.517232] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890 00:22:58.155 00:42:31 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:58.155 00:42:31 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:58.155 00:42:31 -- 
bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.155 00:42:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:58.155 00:42:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:58.155 00:42:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:58.155 00:42:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:58.155 00:42:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.155 00:42:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:58.415 "name": "raid_bdev1", 00:22:58.415 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:58.415 "strip_size_kb": 0, 00:22:58.415 "state": "online", 00:22:58.415 "raid_level": "raid1", 00:22:58.415 "superblock": false, 00:22:58.415 "num_base_bdevs": 4, 00:22:58.415 "num_base_bdevs_discovered": 3, 00:22:58.415 "num_base_bdevs_operational": 3, 00:22:58.415 "process": { 00:22:58.415 "type": "rebuild", 00:22:58.415 "target": "spare", 00:22:58.415 "progress": { 00:22:58.415 "blocks": 36864, 00:22:58.415 "percent": 56 00:22:58.415 } 00:22:58.415 }, 00:22:58.415 "base_bdevs_list": [ 00:22:58.415 { 00:22:58.415 "name": "spare", 00:22:58.415 "uuid": "28fec710-1515-54e3-b69d-90b706d2019c", 00:22:58.415 "is_configured": true, 00:22:58.415 "data_offset": 0, 00:22:58.415 "data_size": 65536 00:22:58.415 }, 00:22:58.415 { 00:22:58.415 "name": null, 00:22:58.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.415 "is_configured": false, 00:22:58.415 "data_offset": 0, 00:22:58.415 "data_size": 65536 00:22:58.415 }, 00:22:58.415 { 00:22:58.415 "name": "BaseBdev3", 00:22:58.415 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:58.415 "is_configured": true, 00:22:58.415 "data_offset": 0, 00:22:58.415 "data_size": 65536 00:22:58.415 }, 00:22:58.415 { 00:22:58.415 "name": "BaseBdev4", 00:22:58.415 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:58.415 "is_configured": true, 00:22:58.415 "data_offset": 0, 00:22:58.415 "data_size": 65536 00:22:58.415 } 00:22:58.415 ] 00:22:58.415 }' 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@657 -- # local timeout=501 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.415 00:42:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.674 00:42:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:58.674 "name": "raid_bdev1", 00:22:58.674 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:58.674 "strip_size_kb": 0, 00:22:58.674 
"state": "online", 00:22:58.674 "raid_level": "raid1", 00:22:58.674 "superblock": false, 00:22:58.674 "num_base_bdevs": 4, 00:22:58.674 "num_base_bdevs_discovered": 3, 00:22:58.674 "num_base_bdevs_operational": 3, 00:22:58.674 "process": { 00:22:58.674 "type": "rebuild", 00:22:58.674 "target": "spare", 00:22:58.674 "progress": { 00:22:58.674 "blocks": 43008, 00:22:58.674 "percent": 65 00:22:58.674 } 00:22:58.674 }, 00:22:58.674 "base_bdevs_list": [ 00:22:58.674 { 00:22:58.674 "name": "spare", 00:22:58.674 "uuid": "28fec710-1515-54e3-b69d-90b706d2019c", 00:22:58.675 "is_configured": true, 00:22:58.675 "data_offset": 0, 00:22:58.675 "data_size": 65536 00:22:58.675 }, 00:22:58.675 { 00:22:58.675 "name": null, 00:22:58.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.675 "is_configured": false, 00:22:58.675 "data_offset": 0, 00:22:58.675 "data_size": 65536 00:22:58.675 }, 00:22:58.675 { 00:22:58.675 "name": "BaseBdev3", 00:22:58.675 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:58.675 "is_configured": true, 00:22:58.675 "data_offset": 0, 00:22:58.675 "data_size": 65536 00:22:58.675 }, 00:22:58.675 { 00:22:58.675 "name": "BaseBdev4", 00:22:58.675 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:58.675 "is_configured": true, 00:22:58.675 "data_offset": 0, 00:22:58.675 "data_size": 65536 00:22:58.675 } 00:22:58.675 ] 00:22:58.675 }' 00:22:58.675 00:42:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:58.675 00:42:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.675 00:42:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:58.675 00:42:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.675 00:42:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:59.611 [2024-04-27 00:42:33.126043] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:59.611 [2024-04-27 00:42:33.126384] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:59.611 [2024-04-27 00:42:33.126596] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.611 00:42:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:59.611 00:42:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.611 00:42:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:59.611 00:42:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:59.611 00:42:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:59.611 00:42:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:59.611 00:42:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.611 00:42:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.869 00:42:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:59.869 "name": "raid_bdev1", 00:22:59.869 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:22:59.869 "strip_size_kb": 0, 00:22:59.869 "state": "online", 00:22:59.869 "raid_level": "raid1", 00:22:59.869 "superblock": false, 00:22:59.869 "num_base_bdevs": 4, 00:22:59.869 "num_base_bdevs_discovered": 3, 00:22:59.869 "num_base_bdevs_operational": 3, 00:22:59.869 "base_bdevs_list": [ 00:22:59.869 { 00:22:59.869 "name": "spare", 00:22:59.869 "uuid": "28fec710-1515-54e3-b69d-90b706d2019c", 00:22:59.869 "is_configured": true, 00:22:59.869 "data_offset": 0, 00:22:59.869 "data_size": 65536 00:22:59.869 
}, 00:22:59.869 { 00:22:59.869 "name": null, 00:22:59.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.869 "is_configured": false, 00:22:59.869 "data_offset": 0, 00:22:59.869 "data_size": 65536 00:22:59.869 }, 00:22:59.869 { 00:22:59.869 "name": "BaseBdev3", 00:22:59.869 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:22:59.869 "is_configured": true, 00:22:59.869 "data_offset": 0, 00:22:59.869 "data_size": 65536 00:22:59.869 }, 00:22:59.869 { 00:22:59.869 "name": "BaseBdev4", 00:22:59.869 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:22:59.869 "is_configured": true, 00:22:59.869 "data_offset": 0, 00:22:59.869 "data_size": 65536 00:22:59.869 } 00:22:59.869 ] 00:22:59.869 }' 00:22:59.869 00:42:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:59.869 00:42:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@660 -- # break 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.127 00:42:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:00.385 "name": "raid_bdev1", 00:23:00.385 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:23:00.385 "strip_size_kb": 0, 00:23:00.385 "state": "online", 00:23:00.385 "raid_level": "raid1", 00:23:00.385 "superblock": false, 00:23:00.385 "num_base_bdevs": 4, 00:23:00.385 "num_base_bdevs_discovered": 3, 00:23:00.385 "num_base_bdevs_operational": 3, 00:23:00.385 "base_bdevs_list": [ 00:23:00.385 { 00:23:00.385 "name": "spare", 00:23:00.385 "uuid": "28fec710-1515-54e3-b69d-90b706d2019c", 00:23:00.385 "is_configured": true, 00:23:00.385 "data_offset": 0, 00:23:00.385 "data_size": 65536 00:23:00.385 }, 00:23:00.385 { 00:23:00.385 "name": null, 00:23:00.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.385 "is_configured": false, 00:23:00.385 "data_offset": 0, 00:23:00.385 "data_size": 65536 00:23:00.385 }, 00:23:00.385 { 00:23:00.385 "name": "BaseBdev3", 00:23:00.385 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:23:00.385 "is_configured": true, 00:23:00.385 "data_offset": 0, 00:23:00.385 "data_size": 65536 00:23:00.385 }, 00:23:00.385 { 00:23:00.385 "name": "BaseBdev4", 00:23:00.385 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:23:00.385 "is_configured": true, 00:23:00.385 "data_offset": 0, 00:23:00.385 "data_size": 65536 00:23:00.385 } 00:23:00.385 ] 00:23:00.385 }' 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:00.385 
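Every verify_raid_bdev_state call in this trace follows one idiom: dump all raid bdevs over RPC, select raid_bdev1 with jq, then assert individual fields of the result. Condensed into a standalone sketch (socket path, RPC name, and jq filters as in the trace; the expected values are the degraded-but-online state the test has just reached):

    info=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1")')

    # raid1 stays online in degraded mode after losing one of its base bdevs.
    [[ $(jq -r '.state' <<<"$info") == online ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") == 3 ]]
    # The // "none" fallback covers the case where no rebuild is in progress.
    [[ $(jq -r '.process.type // "none"' <<<"$info") == none ]]
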
00:42:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.385 00:42:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.643 00:42:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.643 "name": "raid_bdev1", 00:23:00.643 "uuid": "143b4d34-160f-4b9b-86e4-25b702d0bb3f", 00:23:00.643 "strip_size_kb": 0, 00:23:00.643 "state": "online", 00:23:00.643 "raid_level": "raid1", 00:23:00.643 "superblock": false, 00:23:00.643 "num_base_bdevs": 4, 00:23:00.643 "num_base_bdevs_discovered": 3, 00:23:00.643 "num_base_bdevs_operational": 3, 00:23:00.643 "base_bdevs_list": [ 00:23:00.643 { 00:23:00.643 "name": "spare", 00:23:00.643 "uuid": "28fec710-1515-54e3-b69d-90b706d2019c", 00:23:00.643 "is_configured": true, 00:23:00.643 "data_offset": 0, 00:23:00.643 "data_size": 65536 00:23:00.643 }, 00:23:00.643 { 00:23:00.643 "name": null, 00:23:00.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.643 "is_configured": false, 00:23:00.643 "data_offset": 0, 00:23:00.643 "data_size": 65536 00:23:00.643 }, 00:23:00.643 { 00:23:00.643 "name": "BaseBdev3", 00:23:00.643 "uuid": "9e2b3702-c2e7-41f8-ac5d-8195913acf97", 00:23:00.643 "is_configured": true, 00:23:00.643 "data_offset": 0, 00:23:00.643 "data_size": 65536 00:23:00.643 }, 00:23:00.643 { 00:23:00.643 "name": "BaseBdev4", 00:23:00.644 "uuid": "00bea038-cfa9-4f76-822c-4b6112d43378", 00:23:00.644 "is_configured": true, 00:23:00.644 "data_offset": 0, 00:23:00.644 "data_size": 65536 00:23:00.644 } 00:23:00.644 ] 00:23:00.644 }' 00:23:00.644 00:42:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.644 00:42:34 -- common/autotest_common.sh@10 -- # set +x 00:23:01.211 00:42:34 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:01.470 [2024-04-27 00:42:34.923378] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:01.470 [2024-04-27 00:42:34.923556] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:01.470 [2024-04-27 00:42:34.923770] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:01.470 [2024-04-27 00:42:34.923938] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:01.470 [2024-04-27 00:42:34.924039] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:23:01.470 00:42:34 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.470 00:42:34 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:01.729 00:42:35 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:01.729 00:42:35 
-- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:01.729 00:42:35 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@12 -- # local i 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:01.729 00:42:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:01.988 /dev/nbd0 00:23:01.988 00:42:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:01.988 00:42:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:01.988 00:42:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:01.988 00:42:35 -- common/autotest_common.sh@855 -- # local i 00:23:01.988 00:42:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:01.988 00:42:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:01.988 00:42:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:01.988 00:42:35 -- common/autotest_common.sh@859 -- # break 00:23:01.988 00:42:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:01.988 00:42:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:01.988 00:42:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:01.988 1+0 records in 00:23:01.988 1+0 records out 00:23:01.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562383 s, 7.3 MB/s 00:23:01.988 00:42:35 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.988 00:42:35 -- common/autotest_common.sh@872 -- # size=4096 00:23:01.988 00:42:35 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.988 00:42:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:01.988 00:42:35 -- common/autotest_common.sh@875 -- # return 0 00:23:01.988 00:42:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:01.988 00:42:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:01.988 00:42:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:02.247 /dev/nbd1 00:23:02.247 00:42:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:02.247 00:42:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:02.247 00:42:35 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:02.247 00:42:35 -- common/autotest_common.sh@855 -- # local i 00:23:02.247 00:42:35 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:02.247 00:42:35 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:02.247 00:42:35 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:02.247 00:42:35 -- common/autotest_common.sh@859 -- # break 00:23:02.247 00:42:35 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:02.247 00:42:35 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:02.247 00:42:35 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.247 1+0 records in 00:23:02.247 1+0 records out 00:23:02.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004809 s, 8.5 MB/s 00:23:02.247 00:42:35 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.247 00:42:35 -- common/autotest_common.sh@872 -- # size=4096 00:23:02.247 00:42:35 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.247 00:42:35 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:02.247 00:42:35 -- common/autotest_common.sh@875 -- # return 0 00:23:02.247 00:42:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:02.247 00:42:35 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:02.247 00:42:35 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:02.505 00:42:35 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:02.505 00:42:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:02.505 00:42:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:02.505 00:42:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:02.505 00:42:35 -- bdev/nbd_common.sh@51 -- # local i 00:23:02.505 00:42:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.505 00:42:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@41 -- # break 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.774 00:42:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:03.051 00:42:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:03.051 00:42:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:03.051 00:42:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:03.051 00:42:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:03.051 00:42:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:03.051 00:42:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:03.051 00:42:36 -- bdev/nbd_common.sh@41 -- # break 00:23:03.051 00:42:36 -- bdev/nbd_common.sh@45 -- # return 0 00:23:03.051 00:42:36 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:03.051 00:42:36 -- bdev/bdev_raid.sh@709 -- # killprocess 132406 00:23:03.051 00:42:36 -- common/autotest_common.sh@936 -- # '[' -z 132406 ']' 00:23:03.051 00:42:36 -- common/autotest_common.sh@940 -- # kill -0 132406 00:23:03.051 00:42:36 -- common/autotest_common.sh@941 -- # uname 00:23:03.051 00:42:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.051 00:42:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132406 00:23:03.051 00:42:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:03.051 00:42:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:03.051 
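The data verification that just completed works by exporting the original base bdev and the rebuilt spare as NBD block devices and byte-comparing them; since this test variant runs without superblocks, data_offset is 0 and the two devices must match from byte zero. The same steps as a standalone sketch (device names and RPCs as in the trace):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0   # holds the data written before removal
    $rpc nbd_start_disk spare /dev/nbd1       # rebuilt from the surviving mirrors

    cmp -i 0 /dev/nbd0 /dev/nbd1              # -i 0: compare from offset 0 (no superblock)

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
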
00:42:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132406' 00:23:03.051 killing process with pid 132406 00:23:03.051 00:42:36 -- common/autotest_common.sh@955 -- # kill 132406 00:23:03.051 Received shutdown signal, test time was about 60.000000 seconds 00:23:03.051 00:23:03.051 Latency(us) 00:23:03.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.051 =================================================================================================================== 00:23:03.051 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:03.051 [2024-04-27 00:42:36.532654] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:03.051 00:42:36 -- common/autotest_common.sh@960 -- # wait 132406 00:23:03.310 [2024-04-27 00:42:36.875389] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:04.687 ************************************ 00:23:04.687 END TEST raid_rebuild_test 00:23:04.687 ************************************ 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:04.687 00:23:04.687 real 0m22.889s 00:23:04.687 user 0m31.527s 00:23:04.687 sys 0m3.938s 00:23:04.687 00:42:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:04.687 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:23:04.687 00:42:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:04.687 00:42:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:04.687 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:23:04.687 ************************************ 00:23:04.687 START TEST raid_rebuild_test_sb 00:23:04.687 ************************************ 00:23:04.687 00:42:37 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 true false 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:04.687 00:42:37 -- 
bdev/bdev_raid.sh@524 -- # local create_arg 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@544 -- # raid_pid=132966 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132966 /var/tmp/spdk-raid.sock 00:23:04.687 00:42:37 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:04.687 00:42:37 -- common/autotest_common.sh@817 -- # '[' -z 132966 ']' 00:23:04.687 00:42:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:04.687 00:42:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:04.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:04.687 00:42:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:04.687 00:42:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:04.687 00:42:37 -- common/autotest_common.sh@10 -- # set +x 00:23:04.687 [2024-04-27 00:42:38.032939] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:23:04.687 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:04.687 Zero copy mechanism will not be used. 00:23:04.687 [2024-04-27 00:42:38.033142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132966 ] 00:23:04.687 [2024-04-27 00:42:38.186001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.947 [2024-04-27 00:42:38.368251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.206 [2024-04-27 00:42:38.544333] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:05.465 00:42:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:05.465 00:42:38 -- common/autotest_common.sh@850 -- # return 0 00:23:05.465 00:42:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:05.465 00:42:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:05.465 00:42:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:05.723 BaseBdev1_malloc 00:23:05.723 00:42:39 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:05.982 [2024-04-27 00:42:39.412402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:05.982 [2024-04-27 00:42:39.412521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.982 [2024-04-27 00:42:39.412554] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:05.982 [2024-04-27 00:42:39.412597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.982 [2024-04-27 
00:42:39.415103] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.982 [2024-04-27 00:42:39.415165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:05.982 BaseBdev1 00:23:05.982 00:42:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:05.982 00:42:39 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:05.982 00:42:39 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:06.241 BaseBdev2_malloc 00:23:06.241 00:42:39 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:06.500 [2024-04-27 00:42:39.950846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:06.500 [2024-04-27 00:42:39.950939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:06.500 [2024-04-27 00:42:39.951023] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:06.500 [2024-04-27 00:42:39.951105] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:06.500 [2024-04-27 00:42:39.953608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:06.500 [2024-04-27 00:42:39.953689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:06.500 BaseBdev2 00:23:06.500 00:42:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:06.500 00:42:39 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:06.500 00:42:39 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:06.759 BaseBdev3_malloc 00:23:06.759 00:42:40 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:07.017 [2024-04-27 00:42:40.423322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:07.017 [2024-04-27 00:42:40.423462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.017 [2024-04-27 00:42:40.423522] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:07.017 [2024-04-27 00:42:40.423566] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.017 [2024-04-27 00:42:40.426117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.017 [2024-04-27 00:42:40.426221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:07.017 BaseBdev3 00:23:07.017 00:42:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:07.017 00:42:40 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:07.017 00:42:40 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:07.276 BaseBdev4_malloc 00:23:07.276 00:42:40 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:07.536 [2024-04-27 00:42:40.962193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:07.536 [2024-04-27 00:42:40.962313] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.536 [2024-04-27 00:42:40.962365] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:07.536 [2024-04-27 00:42:40.962427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.536 [2024-04-27 00:42:40.965022] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.536 [2024-04-27 00:42:40.965106] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:07.536 BaseBdev4 00:23:07.536 00:42:40 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:07.795 spare_malloc 00:23:07.795 00:42:41 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:08.054 spare_delay 00:23:08.054 00:42:41 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:08.313 [2024-04-27 00:42:41.721142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:08.313 [2024-04-27 00:42:41.721250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.313 [2024-04-27 00:42:41.721286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:08.313 [2024-04-27 00:42:41.721331] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.313 [2024-04-27 00:42:41.723841] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.313 [2024-04-27 00:42:41.723917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:08.313 spare 00:23:08.313 00:42:41 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:08.571 [2024-04-27 00:42:41.941264] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.571 [2024-04-27 00:42:41.943520] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.571 [2024-04-27 00:42:41.943633] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:08.571 [2024-04-27 00:42:41.943696] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:08.571 [2024-04-27 00:42:41.943931] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:23:08.571 [2024-04-27 00:42:41.943945] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:08.571 [2024-04-27 00:42:41.944095] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:08.571 [2024-04-27 00:42:41.944470] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:23:08.572 [2024-04-27 00:42:41.944485] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:23:08.572 [2024-04-27 00:42:41.944644] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=raid_bdev1 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.572 00:42:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.830 00:42:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:08.830 "name": "raid_bdev1", 00:23:08.830 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:08.830 "strip_size_kb": 0, 00:23:08.830 "state": "online", 00:23:08.830 "raid_level": "raid1", 00:23:08.830 "superblock": true, 00:23:08.830 "num_base_bdevs": 4, 00:23:08.830 "num_base_bdevs_discovered": 4, 00:23:08.830 "num_base_bdevs_operational": 4, 00:23:08.830 "base_bdevs_list": [ 00:23:08.830 { 00:23:08.830 "name": "BaseBdev1", 00:23:08.830 "uuid": "275753ce-4e66-5f67-a0a6-7b0fe9be1ed4", 00:23:08.830 "is_configured": true, 00:23:08.830 "data_offset": 2048, 00:23:08.830 "data_size": 63488 00:23:08.830 }, 00:23:08.830 { 00:23:08.830 "name": "BaseBdev2", 00:23:08.830 "uuid": "58586261-09ab-57e5-b4ca-29f3c2c7a439", 00:23:08.830 "is_configured": true, 00:23:08.830 "data_offset": 2048, 00:23:08.830 "data_size": 63488 00:23:08.830 }, 00:23:08.830 { 00:23:08.830 "name": "BaseBdev3", 00:23:08.830 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:08.830 "is_configured": true, 00:23:08.830 "data_offset": 2048, 00:23:08.830 "data_size": 63488 00:23:08.830 }, 00:23:08.830 { 00:23:08.830 "name": "BaseBdev4", 00:23:08.830 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:08.830 "is_configured": true, 00:23:08.830 "data_offset": 2048, 00:23:08.830 "data_size": 63488 00:23:08.830 } 00:23:08.830 ] 00:23:08.830 }' 00:23:08.830 00:42:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:08.830 00:42:42 -- common/autotest_common.sh@10 -- # set +x 00:23:09.422 00:42:42 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:09.422 00:42:42 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:09.680 [2024-04-27 00:42:43.037719] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:09.680 00:42:43 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:09.680 00:42:43 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.680 00:42:43 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:09.680 00:42:43 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:09.680 00:42:43 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:09.938 00:42:43 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:09.938 00:42:43 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@10 -- # 
bdev_list=('raid_bdev1') 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@12 -- # local i 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:09.938 [2024-04-27 00:42:43.453597] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:09.938 /dev/nbd0 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:09.938 00:42:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:09.938 00:42:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:09.938 00:42:43 -- common/autotest_common.sh@855 -- # local i 00:23:09.938 00:42:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:09.938 00:42:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:09.938 00:42:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:09.938 00:42:43 -- common/autotest_common.sh@859 -- # break 00:23:09.938 00:42:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:09.938 00:42:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:09.938 00:42:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.938 1+0 records in 00:23:09.938 1+0 records out 00:23:10.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710148 s, 5.8 MB/s 00:23:10.196 00:42:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.196 00:42:43 -- common/autotest_common.sh@872 -- # size=4096 00:23:10.196 00:42:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:10.196 00:42:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:10.196 00:42:43 -- common/autotest_common.sh@875 -- # return 0 00:23:10.196 00:42:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:10.196 00:42:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:10.196 00:42:43 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:23:10.196 00:42:43 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:23:10.196 00:42:43 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:23:16.761 63488+0 records in 00:23:16.761 63488+0 records out 00:23:16.761 32505856 bytes (33 MB, 31 MiB) copied, 6.24488 s, 5.2 MB/s 00:23:16.761 00:42:49 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:16.761 00:42:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:16.761 00:42:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:16.761 00:42:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:16.761 00:42:49 -- bdev/nbd_common.sh@51 -- # local i 00:23:16.761 00:42:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:16.761 00:42:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:16.761 [2024-04-27 00:42:49.998072] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:16.761 00:42:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:16.761 00:42:50 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:16.761 00:42:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:16.761 00:42:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:16.761 00:42:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:16.762 00:42:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:16.762 00:42:50 -- bdev/nbd_common.sh@41 -- # break 00:23:16.762 00:42:50 -- bdev/nbd_common.sh@45 -- # return 0 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:16.762 [2024-04-27 00:42:50.245740] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.762 00:42:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.019 00:42:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:17.019 "name": "raid_bdev1", 00:23:17.019 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:17.019 "strip_size_kb": 0, 00:23:17.019 "state": "online", 00:23:17.019 "raid_level": "raid1", 00:23:17.019 "superblock": true, 00:23:17.019 "num_base_bdevs": 4, 00:23:17.019 "num_base_bdevs_discovered": 3, 00:23:17.019 "num_base_bdevs_operational": 3, 00:23:17.019 "base_bdevs_list": [ 00:23:17.019 { 00:23:17.019 "name": null, 00:23:17.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.019 "is_configured": false, 00:23:17.019 "data_offset": 2048, 00:23:17.019 "data_size": 63488 00:23:17.019 }, 00:23:17.019 { 00:23:17.019 "name": "BaseBdev2", 00:23:17.019 "uuid": "58586261-09ab-57e5-b4ca-29f3c2c7a439", 00:23:17.019 "is_configured": true, 00:23:17.019 "data_offset": 2048, 00:23:17.019 "data_size": 63488 00:23:17.019 }, 00:23:17.019 { 00:23:17.019 "name": "BaseBdev3", 00:23:17.019 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:17.019 "is_configured": true, 00:23:17.019 "data_offset": 2048, 00:23:17.019 "data_size": 63488 00:23:17.019 }, 00:23:17.019 { 00:23:17.019 "name": "BaseBdev4", 00:23:17.019 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:17.019 "is_configured": true, 00:23:17.019 "data_offset": 2048, 00:23:17.019 "data_size": 63488 00:23:17.019 } 00:23:17.019 ] 00:23:17.019 }' 00:23:17.019 00:42:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:17.019 00:42:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.600 00:42:51 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:17.869 [2024-04-27 00:42:51.377968] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:23:17.869 [2024-04-27 00:42:51.378030] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:17.870 [2024-04-27 00:42:51.389657] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:23:17.870 [2024-04-27 00:42:51.391898] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:17.870 00:42:51 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:19.256 00:42:52 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.256 00:42:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:19.256 00:42:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:19.256 00:42:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:19.256 00:42:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:19.257 00:42:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.257 00:42:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.257 00:42:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:19.257 "name": "raid_bdev1", 00:23:19.257 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:19.257 "strip_size_kb": 0, 00:23:19.257 "state": "online", 00:23:19.257 "raid_level": "raid1", 00:23:19.257 "superblock": true, 00:23:19.257 "num_base_bdevs": 4, 00:23:19.257 "num_base_bdevs_discovered": 4, 00:23:19.257 "num_base_bdevs_operational": 4, 00:23:19.257 "process": { 00:23:19.257 "type": "rebuild", 00:23:19.257 "target": "spare", 00:23:19.257 "progress": { 00:23:19.257 "blocks": 24576, 00:23:19.257 "percent": 38 00:23:19.257 } 00:23:19.257 }, 00:23:19.257 "base_bdevs_list": [ 00:23:19.257 { 00:23:19.257 "name": "spare", 00:23:19.257 "uuid": "c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:19.257 "is_configured": true, 00:23:19.257 "data_offset": 2048, 00:23:19.257 "data_size": 63488 00:23:19.257 }, 00:23:19.257 { 00:23:19.257 "name": "BaseBdev2", 00:23:19.257 "uuid": "58586261-09ab-57e5-b4ca-29f3c2c7a439", 00:23:19.257 "is_configured": true, 00:23:19.257 "data_offset": 2048, 00:23:19.257 "data_size": 63488 00:23:19.257 }, 00:23:19.257 { 00:23:19.257 "name": "BaseBdev3", 00:23:19.257 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:19.257 "is_configured": true, 00:23:19.257 "data_offset": 2048, 00:23:19.257 "data_size": 63488 00:23:19.257 }, 00:23:19.257 { 00:23:19.257 "name": "BaseBdev4", 00:23:19.257 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:19.257 "is_configured": true, 00:23:19.257 "data_offset": 2048, 00:23:19.257 "data_size": 63488 00:23:19.257 } 00:23:19.257 ] 00:23:19.257 }' 00:23:19.257 00:42:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:19.257 00:42:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:19.257 00:42:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:19.257 00:42:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:19.257 00:42:52 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:19.523 [2024-04-27 00:42:52.973975] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:19.523 [2024-04-27 00:42:53.000874] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:19.523 [2024-04-27 00:42:53.000980] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.523 00:42:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.784 00:42:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.784 "name": "raid_bdev1", 00:23:19.784 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:19.784 "strip_size_kb": 0, 00:23:19.784 "state": "online", 00:23:19.784 "raid_level": "raid1", 00:23:19.784 "superblock": true, 00:23:19.784 "num_base_bdevs": 4, 00:23:19.784 "num_base_bdevs_discovered": 3, 00:23:19.784 "num_base_bdevs_operational": 3, 00:23:19.784 "base_bdevs_list": [ 00:23:19.784 { 00:23:19.784 "name": null, 00:23:19.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.784 "is_configured": false, 00:23:19.784 "data_offset": 2048, 00:23:19.784 "data_size": 63488 00:23:19.784 }, 00:23:19.784 { 00:23:19.784 "name": "BaseBdev2", 00:23:19.784 "uuid": "58586261-09ab-57e5-b4ca-29f3c2c7a439", 00:23:19.784 "is_configured": true, 00:23:19.784 "data_offset": 2048, 00:23:19.784 "data_size": 63488 00:23:19.784 }, 00:23:19.784 { 00:23:19.784 "name": "BaseBdev3", 00:23:19.784 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:19.784 "is_configured": true, 00:23:19.784 "data_offset": 2048, 00:23:19.784 "data_size": 63488 00:23:19.784 }, 00:23:19.784 { 00:23:19.784 "name": "BaseBdev4", 00:23:19.784 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:19.784 "is_configured": true, 00:23:19.784 "data_offset": 2048, 00:23:19.784 "data_size": 63488 00:23:19.784 } 00:23:19.784 ] 00:23:19.784 }' 00:23:19.784 00:42:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.784 00:42:53 -- common/autotest_common.sh@10 -- # set +x 00:23:20.352 00:42:53 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:20.352 00:42:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:20.352 00:42:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:20.352 00:42:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:20.352 00:42:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:20.352 00:42:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.352 00:42:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.610 00:42:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:20.610 "name": "raid_bdev1", 00:23:20.610 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:20.610 "strip_size_kb": 0, 00:23:20.610 "state": "online", 00:23:20.610 "raid_level": "raid1", 00:23:20.610 
"superblock": true, 00:23:20.610 "num_base_bdevs": 4, 00:23:20.610 "num_base_bdevs_discovered": 3, 00:23:20.610 "num_base_bdevs_operational": 3, 00:23:20.610 "base_bdevs_list": [ 00:23:20.610 { 00:23:20.610 "name": null, 00:23:20.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.610 "is_configured": false, 00:23:20.610 "data_offset": 2048, 00:23:20.610 "data_size": 63488 00:23:20.610 }, 00:23:20.610 { 00:23:20.610 "name": "BaseBdev2", 00:23:20.610 "uuid": "58586261-09ab-57e5-b4ca-29f3c2c7a439", 00:23:20.610 "is_configured": true, 00:23:20.610 "data_offset": 2048, 00:23:20.610 "data_size": 63488 00:23:20.610 }, 00:23:20.610 { 00:23:20.610 "name": "BaseBdev3", 00:23:20.610 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:20.610 "is_configured": true, 00:23:20.610 "data_offset": 2048, 00:23:20.610 "data_size": 63488 00:23:20.610 }, 00:23:20.610 { 00:23:20.610 "name": "BaseBdev4", 00:23:20.610 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:20.610 "is_configured": true, 00:23:20.610 "data_offset": 2048, 00:23:20.610 "data_size": 63488 00:23:20.610 } 00:23:20.610 ] 00:23:20.610 }' 00:23:20.610 00:42:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:20.610 00:42:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:20.610 00:42:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:20.869 00:42:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:20.869 00:42:54 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:20.869 [2024-04-27 00:42:54.430555] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:20.869 [2024-04-27 00:42:54.430627] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:20.869 [2024-04-27 00:42:54.441751] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:23:20.869 [2024-04-27 00:42:54.443972] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:20.869 00:42:54 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:22.242 "name": "raid_bdev1", 00:23:22.242 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:22.242 "strip_size_kb": 0, 00:23:22.242 "state": "online", 00:23:22.242 "raid_level": "raid1", 00:23:22.242 "superblock": true, 00:23:22.242 "num_base_bdevs": 4, 00:23:22.242 "num_base_bdevs_discovered": 4, 00:23:22.242 "num_base_bdevs_operational": 4, 00:23:22.242 "process": { 00:23:22.242 "type": "rebuild", 00:23:22.242 "target": "spare", 00:23:22.242 "progress": { 00:23:22.242 "blocks": 24576, 00:23:22.242 "percent": 38 00:23:22.242 } 00:23:22.242 }, 00:23:22.242 "base_bdevs_list": [ 00:23:22.242 { 00:23:22.242 "name": "spare", 00:23:22.242 "uuid": 
"c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:22.242 "is_configured": true, 00:23:22.242 "data_offset": 2048, 00:23:22.242 "data_size": 63488 00:23:22.242 }, 00:23:22.242 { 00:23:22.242 "name": "BaseBdev2", 00:23:22.242 "uuid": "58586261-09ab-57e5-b4ca-29f3c2c7a439", 00:23:22.242 "is_configured": true, 00:23:22.242 "data_offset": 2048, 00:23:22.242 "data_size": 63488 00:23:22.242 }, 00:23:22.242 { 00:23:22.242 "name": "BaseBdev3", 00:23:22.242 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:22.242 "is_configured": true, 00:23:22.242 "data_offset": 2048, 00:23:22.242 "data_size": 63488 00:23:22.242 }, 00:23:22.242 { 00:23:22.242 "name": "BaseBdev4", 00:23:22.242 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:22.242 "is_configured": true, 00:23:22.242 "data_offset": 2048, 00:23:22.242 "data_size": 63488 00:23:22.242 } 00:23:22.242 ] 00:23:22.242 }' 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:22.242 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:22.242 00:42:55 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:22.500 [2024-04-27 00:42:56.022289] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:22.500 [2024-04-27 00:42:56.053010] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.758 00:42:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.016 "name": "raid_bdev1", 00:23:23.016 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:23.016 "strip_size_kb": 0, 00:23:23.016 "state": "online", 00:23:23.016 "raid_level": "raid1", 00:23:23.016 "superblock": true, 00:23:23.016 "num_base_bdevs": 4, 00:23:23.016 "num_base_bdevs_discovered": 3, 00:23:23.016 "num_base_bdevs_operational": 3, 00:23:23.016 "process": { 00:23:23.016 "type": "rebuild", 00:23:23.016 "target": "spare", 00:23:23.016 "progress": { 00:23:23.016 "blocks": 38912, 00:23:23.016 "percent": 61 00:23:23.016 } 00:23:23.016 }, 00:23:23.016 "base_bdevs_list": [ 
00:23:23.016 { 00:23:23.016 "name": "spare", 00:23:23.016 "uuid": "c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:23.016 "is_configured": true, 00:23:23.016 "data_offset": 2048, 00:23:23.016 "data_size": 63488 00:23:23.016 }, 00:23:23.016 { 00:23:23.016 "name": null, 00:23:23.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.016 "is_configured": false, 00:23:23.016 "data_offset": 2048, 00:23:23.016 "data_size": 63488 00:23:23.016 }, 00:23:23.016 { 00:23:23.016 "name": "BaseBdev3", 00:23:23.016 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:23.016 "is_configured": true, 00:23:23.016 "data_offset": 2048, 00:23:23.016 "data_size": 63488 00:23:23.016 }, 00:23:23.016 { 00:23:23.016 "name": "BaseBdev4", 00:23:23.016 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:23.016 "is_configured": true, 00:23:23.016 "data_offset": 2048, 00:23:23.016 "data_size": 63488 00:23:23.016 } 00:23:23.016 ] 00:23:23.016 }' 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@657 -- # local timeout=526 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.016 00:42:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.274 00:42:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:23.274 "name": "raid_bdev1", 00:23:23.274 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:23.274 "strip_size_kb": 0, 00:23:23.274 "state": "online", 00:23:23.274 "raid_level": "raid1", 00:23:23.274 "superblock": true, 00:23:23.274 "num_base_bdevs": 4, 00:23:23.274 "num_base_bdevs_discovered": 3, 00:23:23.274 "num_base_bdevs_operational": 3, 00:23:23.274 "process": { 00:23:23.274 "type": "rebuild", 00:23:23.274 "target": "spare", 00:23:23.274 "progress": { 00:23:23.274 "blocks": 47104, 00:23:23.274 "percent": 74 00:23:23.274 } 00:23:23.274 }, 00:23:23.274 "base_bdevs_list": [ 00:23:23.274 { 00:23:23.274 "name": "spare", 00:23:23.274 "uuid": "c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:23.274 "is_configured": true, 00:23:23.274 "data_offset": 2048, 00:23:23.274 "data_size": 63488 00:23:23.274 }, 00:23:23.274 { 00:23:23.274 "name": null, 00:23:23.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.274 "is_configured": false, 00:23:23.274 "data_offset": 2048, 00:23:23.274 "data_size": 63488 00:23:23.274 }, 00:23:23.274 { 00:23:23.274 "name": "BaseBdev3", 00:23:23.274 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:23.274 "is_configured": true, 00:23:23.274 "data_offset": 2048, 00:23:23.274 "data_size": 63488 00:23:23.274 }, 00:23:23.274 { 00:23:23.274 "name": "BaseBdev4", 00:23:23.275 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:23.275 "is_configured": true, 
00:23:23.275 "data_offset": 2048, 00:23:23.275 "data_size": 63488 00:23:23.275 } 00:23:23.275 ] 00:23:23.275 }' 00:23:23.275 00:42:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:23.533 00:42:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:23.533 00:42:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:23.533 00:42:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:23.533 00:42:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:24.099 [2024-04-27 00:42:57.561686] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:24.099 [2024-04-27 00:42:57.561778] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:24.099 [2024-04-27 00:42:57.561941] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.358 00:42:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:24.358 00:42:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:24.358 00:42:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:24.358 00:42:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:24.358 00:42:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:24.358 00:42:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:24.358 00:42:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.358 00:42:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.618 00:42:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:24.618 "name": "raid_bdev1", 00:23:24.618 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:24.618 "strip_size_kb": 0, 00:23:24.618 "state": "online", 00:23:24.618 "raid_level": "raid1", 00:23:24.618 "superblock": true, 00:23:24.618 "num_base_bdevs": 4, 00:23:24.618 "num_base_bdevs_discovered": 3, 00:23:24.618 "num_base_bdevs_operational": 3, 00:23:24.618 "base_bdevs_list": [ 00:23:24.618 { 00:23:24.618 "name": "spare", 00:23:24.618 "uuid": "c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:24.618 "is_configured": true, 00:23:24.618 "data_offset": 2048, 00:23:24.618 "data_size": 63488 00:23:24.618 }, 00:23:24.618 { 00:23:24.618 "name": null, 00:23:24.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.618 "is_configured": false, 00:23:24.618 "data_offset": 2048, 00:23:24.618 "data_size": 63488 00:23:24.618 }, 00:23:24.618 { 00:23:24.618 "name": "BaseBdev3", 00:23:24.618 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:24.618 "is_configured": true, 00:23:24.618 "data_offset": 2048, 00:23:24.618 "data_size": 63488 00:23:24.618 }, 00:23:24.618 { 00:23:24.618 "name": "BaseBdev4", 00:23:24.618 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:24.618 "is_configured": true, 00:23:24.618 "data_offset": 2048, 00:23:24.618 "data_size": 63488 00:23:24.618 } 00:23:24.618 ] 00:23:24.618 }' 00:23:24.619 00:42:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@660 -- # break 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.880 00:42:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:25.138 "name": "raid_bdev1", 00:23:25.138 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:25.138 "strip_size_kb": 0, 00:23:25.138 "state": "online", 00:23:25.138 "raid_level": "raid1", 00:23:25.138 "superblock": true, 00:23:25.138 "num_base_bdevs": 4, 00:23:25.138 "num_base_bdevs_discovered": 3, 00:23:25.138 "num_base_bdevs_operational": 3, 00:23:25.138 "base_bdevs_list": [ 00:23:25.138 { 00:23:25.138 "name": "spare", 00:23:25.138 "uuid": "c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:25.138 "is_configured": true, 00:23:25.138 "data_offset": 2048, 00:23:25.138 "data_size": 63488 00:23:25.138 }, 00:23:25.138 { 00:23:25.138 "name": null, 00:23:25.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.138 "is_configured": false, 00:23:25.138 "data_offset": 2048, 00:23:25.138 "data_size": 63488 00:23:25.138 }, 00:23:25.138 { 00:23:25.138 "name": "BaseBdev3", 00:23:25.138 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:25.138 "is_configured": true, 00:23:25.138 "data_offset": 2048, 00:23:25.138 "data_size": 63488 00:23:25.138 }, 00:23:25.138 { 00:23:25.138 "name": "BaseBdev4", 00:23:25.138 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:25.138 "is_configured": true, 00:23:25.138 "data_offset": 2048, 00:23:25.138 "data_size": 63488 00:23:25.138 } 00:23:25.138 ] 00:23:25.138 }' 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.138 00:42:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.397 00:42:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:25.397 "name": "raid_bdev1", 00:23:25.397 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:25.397 "strip_size_kb": 0, 00:23:25.397 "state": "online", 00:23:25.397 "raid_level": "raid1", 00:23:25.397 "superblock": true, 00:23:25.397 
"num_base_bdevs": 4, 00:23:25.397 "num_base_bdevs_discovered": 3, 00:23:25.397 "num_base_bdevs_operational": 3, 00:23:25.397 "base_bdevs_list": [ 00:23:25.397 { 00:23:25.397 "name": "spare", 00:23:25.397 "uuid": "c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:25.397 "is_configured": true, 00:23:25.397 "data_offset": 2048, 00:23:25.397 "data_size": 63488 00:23:25.397 }, 00:23:25.397 { 00:23:25.397 "name": null, 00:23:25.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.397 "is_configured": false, 00:23:25.397 "data_offset": 2048, 00:23:25.397 "data_size": 63488 00:23:25.397 }, 00:23:25.397 { 00:23:25.397 "name": "BaseBdev3", 00:23:25.397 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:25.397 "is_configured": true, 00:23:25.397 "data_offset": 2048, 00:23:25.397 "data_size": 63488 00:23:25.397 }, 00:23:25.397 { 00:23:25.397 "name": "BaseBdev4", 00:23:25.397 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:25.397 "is_configured": true, 00:23:25.397 "data_offset": 2048, 00:23:25.397 "data_size": 63488 00:23:25.397 } 00:23:25.397 ] 00:23:25.397 }' 00:23:25.397 00:42:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:25.397 00:42:58 -- common/autotest_common.sh@10 -- # set +x 00:23:25.964 00:42:59 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:26.223 [2024-04-27 00:42:59.787643] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.223 [2024-04-27 00:42:59.787694] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:26.223 [2024-04-27 00:42:59.787821] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.223 [2024-04-27 00:42:59.787933] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.223 [2024-04-27 00:42:59.787963] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:23:26.223 00:42:59 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.223 00:42:59 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:26.790 00:43:00 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:26.790 00:43:00 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:26.790 00:43:00 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@12 -- # local i 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:26.790 00:43:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:26.790 /dev/nbd0 00:23:27.048 00:43:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:27.048 00:43:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:27.048 00:43:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:27.048 00:43:00 -- 
common/autotest_common.sh@855 -- # local i 00:23:27.048 00:43:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:27.048 00:43:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:27.048 00:43:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:27.048 00:43:00 -- common/autotest_common.sh@859 -- # break 00:23:27.048 00:43:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:27.048 00:43:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:27.048 00:43:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:27.048 1+0 records in 00:23:27.048 1+0 records out 00:23:27.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245833 s, 16.7 MB/s 00:23:27.048 00:43:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.048 00:43:00 -- common/autotest_common.sh@872 -- # size=4096 00:23:27.048 00:43:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.048 00:43:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:27.048 00:43:00 -- common/autotest_common.sh@875 -- # return 0 00:23:27.048 00:43:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:27.048 00:43:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:27.048 00:43:00 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:27.306 /dev/nbd1 00:23:27.306 00:43:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:27.306 00:43:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:27.306 00:43:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:27.306 00:43:00 -- common/autotest_common.sh@855 -- # local i 00:23:27.306 00:43:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:27.306 00:43:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:27.306 00:43:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:27.306 00:43:00 -- common/autotest_common.sh@859 -- # break 00:23:27.306 00:43:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:27.306 00:43:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:27.306 00:43:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:27.306 1+0 records in 00:23:27.306 1+0 records out 00:23:27.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442247 s, 9.3 MB/s 00:23:27.306 00:43:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.306 00:43:00 -- common/autotest_common.sh@872 -- # size=4096 00:23:27.306 00:43:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:27.306 00:43:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:27.306 00:43:00 -- common/autotest_common.sh@875 -- # return 0 00:23:27.306 00:43:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:27.306 00:43:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:27.306 00:43:00 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:27.564 00:43:00 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:27.564 00:43:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:27.564 00:43:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:27.564 00:43:00 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:23:27.564 00:43:00 -- bdev/nbd_common.sh@51 -- # local i 00:23:27.564 00:43:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:27.564 00:43:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@41 -- # break 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@45 -- # return 0 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:27.564 00:43:01 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:28.130 00:43:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:28.130 00:43:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:28.130 00:43:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:28.130 00:43:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:28.130 00:43:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:28.130 00:43:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:28.130 00:43:01 -- bdev/nbd_common.sh@41 -- # break 00:23:28.130 00:43:01 -- bdev/nbd_common.sh@45 -- # return 0 00:23:28.130 00:43:01 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:28.130 00:43:01 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:28.130 00:43:01 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:28.130 00:43:01 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:28.130 00:43:01 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:28.388 [2024-04-27 00:43:01.825105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:28.389 [2024-04-27 00:43:01.825354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.389 [2024-04-27 00:43:01.825437] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:28.389 [2024-04-27 00:43:01.825674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.389 [2024-04-27 00:43:01.828147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.389 [2024-04-27 00:43:01.828348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:28.389 [2024-04-27 00:43:01.828587] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:28.389 [2024-04-27 00:43:01.828756] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.389 BaseBdev1 00:23:28.389 00:43:01 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:28.389 00:43:01 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:23:28.389 00:43:01 -- bdev/bdev_raid.sh@696 -- # continue 00:23:28.389 00:43:01 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:28.389 00:43:01 -- 
bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:28.389 00:43:01 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:28.646 00:43:02 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:28.903 [2024-04-27 00:43:02.305302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:28.903 [2024-04-27 00:43:02.305567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.903 [2024-04-27 00:43:02.305658] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:28.903 [2024-04-27 00:43:02.305902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.903 [2024-04-27 00:43:02.306591] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.903 [2024-04-27 00:43:02.306814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:28.903 [2024-04-27 00:43:02.307068] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:28.903 [2024-04-27 00:43:02.307207] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:23:28.903 [2024-04-27 00:43:02.307321] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:28.903 [2024-04-27 00:43:02.307388] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:23:28.903 [2024-04-27 00:43:02.307702] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:28.903 BaseBdev3 00:23:28.903 00:43:02 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:28.903 00:43:02 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:23:28.903 00:43:02 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:29.161 00:43:02 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:29.161 [2024-04-27 00:43:02.737400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:29.161 [2024-04-27 00:43:02.737662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.161 [2024-04-27 00:43:02.737752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:29.161 [2024-04-27 00:43:02.738000] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.161 [2024-04-27 00:43:02.738611] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.161 [2024-04-27 00:43:02.738826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:29.161 [2024-04-27 00:43:02.739081] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:29.161 [2024-04-27 00:43:02.739214] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:29.161 BaseBdev4 00:23:29.419 00:43:02 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:29.419 00:43:02 -- 
bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:29.677 [2024-04-27 00:43:03.153504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:29.677 [2024-04-27 00:43:03.153647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.677 [2024-04-27 00:43:03.153719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:29.677 [2024-04-27 00:43:03.153751] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.677 [2024-04-27 00:43:03.154357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.677 [2024-04-27 00:43:03.154454] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:29.677 [2024-04-27 00:43:03.154585] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:29.677 [2024-04-27 00:43:03.154638] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.677 spare 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.677 00:43:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.677 [2024-04-27 00:43:03.254846] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:23:29.677 [2024-04-27 00:43:03.254890] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:29.677 [2024-04-27 00:43:03.255119] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:23:29.677 [2024-04-27 00:43:03.255638] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:23:29.677 [2024-04-27 00:43:03.255665] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:23:29.677 [2024-04-27 00:43:03.255861] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.934 00:43:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.934 "name": "raid_bdev1", 00:23:29.934 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:29.934 "strip_size_kb": 0, 00:23:29.934 "state": "online", 00:23:29.934 "raid_level": "raid1", 00:23:29.934 "superblock": true, 00:23:29.934 "num_base_bdevs": 4, 00:23:29.934 "num_base_bdevs_discovered": 3, 00:23:29.934 "num_base_bdevs_operational": 3, 00:23:29.934 "base_bdevs_list": [ 00:23:29.934 { 00:23:29.934 "name": "spare", 00:23:29.934 "uuid": "c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:29.934 "is_configured": 
true, 00:23:29.934 "data_offset": 2048, 00:23:29.934 "data_size": 63488 00:23:29.934 }, 00:23:29.934 { 00:23:29.934 "name": null, 00:23:29.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.934 "is_configured": false, 00:23:29.934 "data_offset": 2048, 00:23:29.934 "data_size": 63488 00:23:29.934 }, 00:23:29.934 { 00:23:29.934 "name": "BaseBdev3", 00:23:29.934 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:29.934 "is_configured": true, 00:23:29.934 "data_offset": 2048, 00:23:29.934 "data_size": 63488 00:23:29.934 }, 00:23:29.934 { 00:23:29.934 "name": "BaseBdev4", 00:23:29.934 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:29.934 "is_configured": true, 00:23:29.934 "data_offset": 2048, 00:23:29.934 "data_size": 63488 00:23:29.934 } 00:23:29.934 ] 00:23:29.934 }' 00:23:29.934 00:43:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.934 00:43:03 -- common/autotest_common.sh@10 -- # set +x 00:23:30.498 00:43:04 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:30.498 00:43:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:30.498 00:43:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:30.498 00:43:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:30.498 00:43:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:30.498 00:43:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.498 00:43:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.755 00:43:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:30.755 "name": "raid_bdev1", 00:23:30.755 "uuid": "b9b6b872-0853-418e-8e75-eeca3ad1796a", 00:23:30.755 "strip_size_kb": 0, 00:23:30.755 "state": "online", 00:23:30.755 "raid_level": "raid1", 00:23:30.755 "superblock": true, 00:23:30.755 "num_base_bdevs": 4, 00:23:30.755 "num_base_bdevs_discovered": 3, 00:23:30.755 "num_base_bdevs_operational": 3, 00:23:30.755 "base_bdevs_list": [ 00:23:30.755 { 00:23:30.755 "name": "spare", 00:23:30.755 "uuid": "c6e8aff1-a075-5eda-864b-b9edeb349ff3", 00:23:30.755 "is_configured": true, 00:23:30.755 "data_offset": 2048, 00:23:30.755 "data_size": 63488 00:23:30.755 }, 00:23:30.755 { 00:23:30.755 "name": null, 00:23:30.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.755 "is_configured": false, 00:23:30.755 "data_offset": 2048, 00:23:30.755 "data_size": 63488 00:23:30.755 }, 00:23:30.755 { 00:23:30.755 "name": "BaseBdev3", 00:23:30.755 "uuid": "7a49efcf-27b8-559c-8e59-8a854b4d612e", 00:23:30.755 "is_configured": true, 00:23:30.756 "data_offset": 2048, 00:23:30.756 "data_size": 63488 00:23:30.756 }, 00:23:30.756 { 00:23:30.756 "name": "BaseBdev4", 00:23:30.756 "uuid": "d4784d26-613a-516c-b746-e9b3df899c28", 00:23:30.756 "is_configured": true, 00:23:30.756 "data_offset": 2048, 00:23:30.756 "data_size": 63488 00:23:30.756 } 00:23:30.756 ] 00:23:30.756 }' 00:23:30.756 00:43:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:30.756 00:43:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:30.756 00:43:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:31.013 00:43:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:31.013 00:43:04 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.013 00:43:04 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:31.279 00:43:04 -- 
bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:31.279 00:43:04 -- bdev/bdev_raid.sh@709 -- # killprocess 132966 00:23:31.279 00:43:04 -- common/autotest_common.sh@936 -- # '[' -z 132966 ']' 00:23:31.279 00:43:04 -- common/autotest_common.sh@940 -- # kill -0 132966 00:23:31.279 00:43:04 -- common/autotest_common.sh@941 -- # uname 00:23:31.279 00:43:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.279 00:43:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 132966 00:23:31.279 killing process with pid 132966 00:23:31.279 Received shutdown signal, test time was about 60.000000 seconds 00:23:31.279 00:23:31.279 Latency(us) 00:23:31.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.279 =================================================================================================================== 00:23:31.279 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.279 00:43:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:31.279 00:43:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:31.279 00:43:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 132966' 00:23:31.279 00:43:04 -- common/autotest_common.sh@955 -- # kill 132966 00:23:31.279 00:43:04 -- common/autotest_common.sh@960 -- # wait 132966 00:23:31.279 [2024-04-27 00:43:04.680279] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:31.279 [2024-04-27 00:43:04.680364] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.279 [2024-04-27 00:43:04.680481] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.279 [2024-04-27 00:43:04.680494] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:23:31.555 [2024-04-27 00:43:05.020980] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:32.490 ************************************ 00:23:32.490 END TEST raid_rebuild_test_sb 00:23:32.490 ************************************ 00:23:32.490 00:43:05 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:32.490 00:23:32.490 real 0m28.034s 00:23:32.490 user 0m40.812s 00:23:32.490 sys 0m4.090s 00:23:32.490 00:43:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:32.490 00:43:05 -- common/autotest_common.sh@10 -- # set +x 00:23:32.490 00:43:06 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:23:32.490 00:43:06 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:32.490 00:43:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:32.490 00:43:06 -- common/autotest_common.sh@10 -- # set +x 00:23:32.749 ************************************ 00:23:32.749 START TEST raid_rebuild_test_io 00:23:32.749 ************************************ 00:23:32.749 00:43:06 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 false true 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:32.749 
00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@544 -- # raid_pid=133630 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:32.749 00:43:06 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133630 /var/tmp/spdk-raid.sock 00:23:32.749 00:43:06 -- common/autotest_common.sh@817 -- # '[' -z 133630 ']' 00:23:32.749 00:43:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:32.749 00:43:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:32.749 00:43:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:32.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:32.749 00:43:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:32.749 00:43:06 -- common/autotest_common.sh@10 -- # set +x 00:23:32.749 [2024-04-27 00:43:06.165077] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:23:32.749 [2024-04-27 00:43:06.165279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133630 ] 00:23:32.749 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:32.749 Zero copy mechanism will not be used. 
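For orientation, the setup this trace drives can be condensed into a short standalone sketch. Every binary path, RPC method and flag below is copied from the surrounding trace itself; only the $rpc shorthand and the loop are added for brevity, so treat this as an illustrative sketch rather than the test's exact code:

    #!/usr/bin/env bash
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # bdevperf hosts the RPC server the test drives; -z makes it wait for an
    # explicit perform_tests RPC before generating I/O (the trace later calls
    # bdevperf.py ... perform_tests against the same socket).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &

    # Four 32 MiB malloc bdevs with 512-byte blocks become the raid1 members.
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_create 32 512 -b "$bdev"
    done

    # The spare is a malloc bdev wrapped in a delay bdev plus a passthru bdev,
    # so rebuild I/O to it can be slowed down and the bdev hot-removed later
    # (the trace removes base bdevs via bdev_raid_remove_base_bdev).
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare

    # Assemble the raid1 bdev and confirm it comes up online with all members,
    # the same check verify_raid_bdev_state performs in the trace.
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'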
00:23:32.749 [2024-04-27 00:43:06.324664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.010 [2024-04-27 00:43:06.517469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.268 [2024-04-27 00:43:06.693108] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:33.526 00:43:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:33.526 00:43:07 -- common/autotest_common.sh@850 -- # return 0 00:23:33.526 00:43:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:33.526 00:43:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:33.526 00:43:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:33.784 BaseBdev1 00:23:33.784 00:43:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:33.784 00:43:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:33.784 00:43:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.042 BaseBdev2 00:23:34.042 00:43:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:34.042 00:43:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:34.042 00:43:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:34.300 BaseBdev3 00:23:34.300 00:43:07 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:34.300 00:43:07 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:34.300 00:43:07 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:34.558 BaseBdev4 00:23:34.558 00:43:08 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:34.816 spare_malloc 00:23:34.816 00:43:08 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:35.074 spare_delay 00:23:35.074 00:43:08 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:35.332 [2024-04-27 00:43:08.800988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:35.333 [2024-04-27 00:43:08.801119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.333 [2024-04-27 00:43:08.801162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:23:35.333 [2024-04-27 00:43:08.801210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.333 [2024-04-27 00:43:08.803803] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.333 [2024-04-27 00:43:08.803877] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:35.333 spare 00:23:35.333 00:43:08 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:35.591 [2024-04-27 00:43:09.021086] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:35.591 [2024-04-27 00:43:09.023285] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.591 [2024-04-27 00:43:09.023348] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:35.591 [2024-04-27 00:43:09.023388] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:35.591 [2024-04-27 00:43:09.023518] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:23:35.591 [2024-04-27 00:43:09.023531] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:35.591 [2024-04-27 00:43:09.023675] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:35.591 [2024-04-27 00:43:09.024065] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:23:35.591 [2024-04-27 00:43:09.024086] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:23:35.591 [2024-04-27 00:43:09.024288] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.591 00:43:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.849 00:43:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:35.849 "name": "raid_bdev1", 00:23:35.849 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:35.849 "strip_size_kb": 0, 00:23:35.849 "state": "online", 00:23:35.849 "raid_level": "raid1", 00:23:35.849 "superblock": false, 00:23:35.849 "num_base_bdevs": 4, 00:23:35.849 "num_base_bdevs_discovered": 4, 00:23:35.849 "num_base_bdevs_operational": 4, 00:23:35.849 "base_bdevs_list": [ 00:23:35.849 { 00:23:35.849 "name": "BaseBdev1", 00:23:35.849 "uuid": "63fcf6b1-b1df-4ccc-b199-8aa2adc42be7", 00:23:35.849 "is_configured": true, 00:23:35.849 "data_offset": 0, 00:23:35.849 "data_size": 65536 00:23:35.849 }, 00:23:35.849 { 00:23:35.849 "name": "BaseBdev2", 00:23:35.849 "uuid": "21795b26-1126-496e-8c81-98c9cdb664ef", 00:23:35.849 "is_configured": true, 00:23:35.849 "data_offset": 0, 00:23:35.849 "data_size": 65536 00:23:35.849 }, 00:23:35.849 { 00:23:35.849 "name": "BaseBdev3", 00:23:35.849 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:35.849 "is_configured": true, 00:23:35.849 "data_offset": 0, 00:23:35.849 "data_size": 65536 00:23:35.849 }, 00:23:35.849 { 00:23:35.849 "name": "BaseBdev4", 00:23:35.849 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:35.849 "is_configured": true, 00:23:35.849 "data_offset": 0, 00:23:35.849 "data_size": 65536 00:23:35.849 } 00:23:35.849 ] 00:23:35.849 }' 00:23:35.849 
00:43:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:35.849 00:43:09 -- common/autotest_common.sh@10 -- # set +x 00:23:36.414 00:43:09 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:36.414 00:43:09 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:36.672 [2024-04-27 00:43:10.085714] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:36.672 00:43:10 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:23:36.672 00:43:10 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.672 00:43:10 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:36.931 00:43:10 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:36.931 00:43:10 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:36.931 00:43:10 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:36.931 00:43:10 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:36.931 [2024-04-27 00:43:10.432303] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:36.931 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:36.931 Zero copy mechanism will not be used. 00:23:36.931 Running I/O for 60 seconds... 00:23:37.190 [2024-04-27 00:43:10.591253] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:37.190 [2024-04-27 00:43:10.597566] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.190 00:43:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.448 00:43:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.448 "name": "raid_bdev1", 00:23:37.448 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:37.448 "strip_size_kb": 0, 00:23:37.448 "state": "online", 00:23:37.448 "raid_level": "raid1", 00:23:37.448 "superblock": false, 00:23:37.448 "num_base_bdevs": 4, 00:23:37.448 "num_base_bdevs_discovered": 3, 00:23:37.448 "num_base_bdevs_operational": 3, 00:23:37.448 "base_bdevs_list": [ 00:23:37.448 { 00:23:37.448 "name": null, 00:23:37.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.448 "is_configured": false, 00:23:37.448 "data_offset": 0, 00:23:37.448 "data_size": 65536 00:23:37.448 }, 00:23:37.448 { 00:23:37.448 "name": "BaseBdev2", 00:23:37.448 
"uuid": "21795b26-1126-496e-8c81-98c9cdb664ef", 00:23:37.448 "is_configured": true, 00:23:37.448 "data_offset": 0, 00:23:37.448 "data_size": 65536 00:23:37.448 }, 00:23:37.448 { 00:23:37.448 "name": "BaseBdev3", 00:23:37.448 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:37.448 "is_configured": true, 00:23:37.448 "data_offset": 0, 00:23:37.448 "data_size": 65536 00:23:37.448 }, 00:23:37.448 { 00:23:37.448 "name": "BaseBdev4", 00:23:37.448 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:37.448 "is_configured": true, 00:23:37.448 "data_offset": 0, 00:23:37.448 "data_size": 65536 00:23:37.448 } 00:23:37.448 ] 00:23:37.448 }' 00:23:37.448 00:43:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.448 00:43:10 -- common/autotest_common.sh@10 -- # set +x 00:23:38.028 00:43:11 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:38.287 [2024-04-27 00:43:11.814294] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:38.287 [2024-04-27 00:43:11.814391] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:38.287 00:43:11 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:38.287 [2024-04-27 00:43:11.857112] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:38.287 [2024-04-27 00:43:11.859298] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:38.546 [2024-04-27 00:43:11.977263] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:38.546 [2024-04-27 00:43:11.978411] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:38.805 [2024-04-27 00:43:12.187257] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:38.805 [2024-04-27 00:43:12.187644] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:39.372 [2024-04-27 00:43:12.847444] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:39.372 00:43:12 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.372 00:43:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:39.372 00:43:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:39.373 00:43:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:39.373 00:43:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:39.373 00:43:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.373 00:43:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.630 [2024-04-27 00:43:12.975454] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:39.630 [2024-04-27 00:43:12.976205] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:39.630 00:43:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.630 "name": "raid_bdev1", 00:23:39.630 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:39.630 "strip_size_kb": 0, 00:23:39.630 "state": "online", 00:23:39.630 "raid_level": "raid1", 
00:23:39.630 "superblock": false, 00:23:39.630 "num_base_bdevs": 4, 00:23:39.630 "num_base_bdevs_discovered": 4, 00:23:39.630 "num_base_bdevs_operational": 4, 00:23:39.630 "process": { 00:23:39.630 "type": "rebuild", 00:23:39.630 "target": "spare", 00:23:39.630 "progress": { 00:23:39.630 "blocks": 16384, 00:23:39.630 "percent": 25 00:23:39.630 } 00:23:39.630 }, 00:23:39.630 "base_bdevs_list": [ 00:23:39.630 { 00:23:39.630 "name": "spare", 00:23:39.630 "uuid": "f2e3c959-d407-55be-8b8b-970ef3a7f691", 00:23:39.630 "is_configured": true, 00:23:39.630 "data_offset": 0, 00:23:39.630 "data_size": 65536 00:23:39.630 }, 00:23:39.630 { 00:23:39.630 "name": "BaseBdev2", 00:23:39.630 "uuid": "21795b26-1126-496e-8c81-98c9cdb664ef", 00:23:39.630 "is_configured": true, 00:23:39.630 "data_offset": 0, 00:23:39.630 "data_size": 65536 00:23:39.630 }, 00:23:39.630 { 00:23:39.630 "name": "BaseBdev3", 00:23:39.630 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:39.630 "is_configured": true, 00:23:39.630 "data_offset": 0, 00:23:39.630 "data_size": 65536 00:23:39.630 }, 00:23:39.630 { 00:23:39.630 "name": "BaseBdev4", 00:23:39.630 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:39.630 "is_configured": true, 00:23:39.630 "data_offset": 0, 00:23:39.630 "data_size": 65536 00:23:39.630 } 00:23:39.630 ] 00:23:39.630 }' 00:23:39.630 00:43:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:39.630 00:43:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.630 00:43:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.888 00:43:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.888 00:43:13 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:39.889 [2024-04-27 00:43:13.407360] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:39.889 [2024-04-27 00:43:13.467958] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:40.147 [2024-04-27 00:43:13.511257] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:40.147 [2024-04-27 00:43:13.633389] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:40.147 [2024-04-27 00:43:13.650316] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.147 [2024-04-27 00:43:13.676867] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.147 00:43:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.405 00:43:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.405 "name": "raid_bdev1", 00:23:40.405 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:40.405 "strip_size_kb": 0, 00:23:40.405 "state": "online", 00:23:40.405 "raid_level": "raid1", 00:23:40.405 "superblock": false, 00:23:40.405 "num_base_bdevs": 4, 00:23:40.405 "num_base_bdevs_discovered": 3, 00:23:40.405 "num_base_bdevs_operational": 3, 00:23:40.405 "base_bdevs_list": [ 00:23:40.405 { 00:23:40.405 "name": null, 00:23:40.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.405 "is_configured": false, 00:23:40.405 "data_offset": 0, 00:23:40.405 "data_size": 65536 00:23:40.405 }, 00:23:40.405 { 00:23:40.405 "name": "BaseBdev2", 00:23:40.405 "uuid": "21795b26-1126-496e-8c81-98c9cdb664ef", 00:23:40.405 "is_configured": true, 00:23:40.405 "data_offset": 0, 00:23:40.405 "data_size": 65536 00:23:40.405 }, 00:23:40.405 { 00:23:40.405 "name": "BaseBdev3", 00:23:40.405 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:40.405 "is_configured": true, 00:23:40.405 "data_offset": 0, 00:23:40.405 "data_size": 65536 00:23:40.405 }, 00:23:40.405 { 00:23:40.405 "name": "BaseBdev4", 00:23:40.405 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:40.405 "is_configured": true, 00:23:40.405 "data_offset": 0, 00:23:40.405 "data_size": 65536 00:23:40.405 } 00:23:40.405 ] 00:23:40.405 }' 00:23:40.405 00:43:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.405 00:43:13 -- common/autotest_common.sh@10 -- # set +x 00:23:41.341 00:43:14 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:41.341 00:43:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:41.341 00:43:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:41.341 00:43:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:41.341 00:43:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:41.341 00:43:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.341 00:43:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.600 00:43:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:41.600 "name": "raid_bdev1", 00:23:41.600 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:41.600 "strip_size_kb": 0, 00:23:41.600 "state": "online", 00:23:41.600 "raid_level": "raid1", 00:23:41.600 "superblock": false, 00:23:41.600 "num_base_bdevs": 4, 00:23:41.600 "num_base_bdevs_discovered": 3, 00:23:41.600 "num_base_bdevs_operational": 3, 00:23:41.600 "base_bdevs_list": [ 00:23:41.600 { 00:23:41.600 "name": null, 00:23:41.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.600 "is_configured": false, 00:23:41.600 "data_offset": 0, 00:23:41.600 "data_size": 65536 00:23:41.600 }, 00:23:41.600 { 00:23:41.600 "name": "BaseBdev2", 00:23:41.600 "uuid": "21795b26-1126-496e-8c81-98c9cdb664ef", 00:23:41.600 "is_configured": true, 00:23:41.600 "data_offset": 0, 00:23:41.600 "data_size": 65536 00:23:41.600 }, 00:23:41.600 { 00:23:41.600 "name": "BaseBdev3", 00:23:41.600 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:41.600 "is_configured": true, 00:23:41.600 "data_offset": 0, 00:23:41.600 "data_size": 65536 00:23:41.600 }, 00:23:41.600 { 00:23:41.600 "name": "BaseBdev4", 00:23:41.600 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 
00:23:41.600 "is_configured": true, 00:23:41.600 "data_offset": 0, 00:23:41.600 "data_size": 65536 00:23:41.600 } 00:23:41.600 ] 00:23:41.600 }' 00:23:41.600 00:43:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:41.600 00:43:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:41.600 00:43:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:41.600 00:43:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:41.600 00:43:15 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:41.859 [2024-04-27 00:43:15.296605] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:41.859 [2024-04-27 00:43:15.296673] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.859 00:43:15 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:41.859 [2024-04-27 00:43:15.349130] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:41.859 [2024-04-27 00:43:15.351422] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:42.118 [2024-04-27 00:43:15.479329] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:42.118 [2024-04-27 00:43:15.698961] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:42.118 [2024-04-27 00:43:15.699224] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:42.377 [2024-04-27 00:43:15.931173] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:42.377 [2024-04-27 00:43:15.932555] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:42.636 [2024-04-27 00:43:16.160622] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:42.636 [2024-04-27 00:43:16.161305] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:42.895 00:43:16 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.895 00:43:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:42.895 00:43:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:42.895 00:43:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:42.895 00:43:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:42.895 00:43:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.895 00:43:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:43.154 "name": "raid_bdev1", 00:23:43.154 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:43.154 "strip_size_kb": 0, 00:23:43.154 "state": "online", 00:23:43.154 "raid_level": "raid1", 00:23:43.154 "superblock": false, 00:23:43.154 "num_base_bdevs": 4, 00:23:43.154 "num_base_bdevs_discovered": 4, 00:23:43.154 "num_base_bdevs_operational": 4, 00:23:43.154 "process": { 00:23:43.154 "type": "rebuild", 00:23:43.154 "target": "spare", 00:23:43.154 "progress": { 00:23:43.154 "blocks": 14336, 
00:23:43.154 "percent": 21 00:23:43.154 } 00:23:43.154 }, 00:23:43.154 "base_bdevs_list": [ 00:23:43.154 { 00:23:43.154 "name": "spare", 00:23:43.154 "uuid": "f2e3c959-d407-55be-8b8b-970ef3a7f691", 00:23:43.154 "is_configured": true, 00:23:43.154 "data_offset": 0, 00:23:43.154 "data_size": 65536 00:23:43.154 }, 00:23:43.154 { 00:23:43.154 "name": "BaseBdev2", 00:23:43.154 "uuid": "21795b26-1126-496e-8c81-98c9cdb664ef", 00:23:43.154 "is_configured": true, 00:23:43.154 "data_offset": 0, 00:23:43.154 "data_size": 65536 00:23:43.154 }, 00:23:43.154 { 00:23:43.154 "name": "BaseBdev3", 00:23:43.154 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:43.154 "is_configured": true, 00:23:43.154 "data_offset": 0, 00:23:43.154 "data_size": 65536 00:23:43.154 }, 00:23:43.154 { 00:23:43.154 "name": "BaseBdev4", 00:23:43.154 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:43.154 "is_configured": true, 00:23:43.154 "data_offset": 0, 00:23:43.154 "data_size": 65536 00:23:43.154 } 00:23:43.154 ] 00:23:43.154 }' 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:43.154 00:43:16 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:43.413 [2024-04-27 00:43:16.812167] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:43.413 [2024-04-27 00:43:16.812648] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:43.413 [2024-04-27 00:43:16.903089] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:43.671 [2024-04-27 00:43:17.046018] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:43.671 [2024-04-27 00:43:17.156060] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:23:43.671 [2024-04-27 00:43:17.156128] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005e10 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.671 00:43:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.930 [2024-04-27 00:43:17.385464] bdev_raid.c: 
853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:43.930 [2024-04-27 00:43:17.386105] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:43.930 00:43:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:43.930 "name": "raid_bdev1", 00:23:43.930 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:43.930 "strip_size_kb": 0, 00:23:43.930 "state": "online", 00:23:43.930 "raid_level": "raid1", 00:23:43.930 "superblock": false, 00:23:43.930 "num_base_bdevs": 4, 00:23:43.930 "num_base_bdevs_discovered": 3, 00:23:43.930 "num_base_bdevs_operational": 3, 00:23:43.930 "process": { 00:23:43.930 "type": "rebuild", 00:23:43.930 "target": "spare", 00:23:43.930 "progress": { 00:23:43.930 "blocks": 26624, 00:23:43.930 "percent": 40 00:23:43.930 } 00:23:43.930 }, 00:23:43.930 "base_bdevs_list": [ 00:23:43.930 { 00:23:43.930 "name": "spare", 00:23:43.930 "uuid": "f2e3c959-d407-55be-8b8b-970ef3a7f691", 00:23:43.930 "is_configured": true, 00:23:43.930 "data_offset": 0, 00:23:43.930 "data_size": 65536 00:23:43.930 }, 00:23:43.930 { 00:23:43.930 "name": null, 00:23:43.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.930 "is_configured": false, 00:23:43.930 "data_offset": 0, 00:23:43.930 "data_size": 65536 00:23:43.930 }, 00:23:43.930 { 00:23:43.930 "name": "BaseBdev3", 00:23:43.930 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:43.930 "is_configured": true, 00:23:43.930 "data_offset": 0, 00:23:43.930 "data_size": 65536 00:23:43.930 }, 00:23:43.930 { 00:23:43.930 "name": "BaseBdev4", 00:23:43.930 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:43.930 "is_configured": true, 00:23:43.930 "data_offset": 0, 00:23:43.930 "data_size": 65536 00:23:43.930 } 00:23:43.930 ] 00:23:43.930 }' 00:23:43.930 00:43:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:43.930 00:43:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.930 00:43:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:43.930 [2024-04-27 00:43:17.509597] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@657 -- # local timeout=547 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.188 00:43:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.447 00:43:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:44.447 "name": "raid_bdev1", 00:23:44.447 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:44.447 "strip_size_kb": 0, 00:23:44.447 "state": "online", 00:23:44.447 "raid_level": "raid1", 00:23:44.447 "superblock": false, 00:23:44.447 "num_base_bdevs": 4, 00:23:44.447 "num_base_bdevs_discovered": 3, 00:23:44.447 
"num_base_bdevs_operational": 3, 00:23:44.447 "process": { 00:23:44.447 "type": "rebuild", 00:23:44.447 "target": "spare", 00:23:44.447 "progress": { 00:23:44.447 "blocks": 30720, 00:23:44.447 "percent": 46 00:23:44.447 } 00:23:44.447 }, 00:23:44.447 "base_bdevs_list": [ 00:23:44.447 { 00:23:44.447 "name": "spare", 00:23:44.447 "uuid": "f2e3c959-d407-55be-8b8b-970ef3a7f691", 00:23:44.447 "is_configured": true, 00:23:44.447 "data_offset": 0, 00:23:44.447 "data_size": 65536 00:23:44.447 }, 00:23:44.447 { 00:23:44.447 "name": null, 00:23:44.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.447 "is_configured": false, 00:23:44.447 "data_offset": 0, 00:23:44.447 "data_size": 65536 00:23:44.447 }, 00:23:44.447 { 00:23:44.447 "name": "BaseBdev3", 00:23:44.447 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:44.447 "is_configured": true, 00:23:44.447 "data_offset": 0, 00:23:44.447 "data_size": 65536 00:23:44.447 }, 00:23:44.447 { 00:23:44.447 "name": "BaseBdev4", 00:23:44.447 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:44.447 "is_configured": true, 00:23:44.447 "data_offset": 0, 00:23:44.447 "data_size": 65536 00:23:44.447 } 00:23:44.447 ] 00:23:44.447 }' 00:23:44.447 00:43:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:44.447 00:43:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.447 00:43:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:44.447 00:43:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.447 00:43:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:44.447 [2024-04-27 00:43:17.947350] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:44.447 [2024-04-27 00:43:17.947594] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:45.014 [2024-04-27 00:43:18.562163] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:45.014 [2024-04-27 00:43:18.562770] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:23:45.594 00:43:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:45.594 00:43:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.594 00:43:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:45.594 00:43:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:45.594 00:43:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:45.594 00:43:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:45.594 00:43:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.594 00:43:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.880 00:43:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:45.880 "name": "raid_bdev1", 00:23:45.880 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:45.880 "strip_size_kb": 0, 00:23:45.880 "state": "online", 00:23:45.880 "raid_level": "raid1", 00:23:45.880 "superblock": false, 00:23:45.880 "num_base_bdevs": 4, 00:23:45.880 "num_base_bdevs_discovered": 3, 00:23:45.880 "num_base_bdevs_operational": 3, 00:23:45.880 "process": { 00:23:45.880 "type": "rebuild", 00:23:45.880 "target": "spare", 00:23:45.880 "progress": { 00:23:45.880 "blocks": 55296, 00:23:45.880 
"percent": 84 00:23:45.880 } 00:23:45.880 }, 00:23:45.880 "base_bdevs_list": [ 00:23:45.880 { 00:23:45.880 "name": "spare", 00:23:45.880 "uuid": "f2e3c959-d407-55be-8b8b-970ef3a7f691", 00:23:45.880 "is_configured": true, 00:23:45.880 "data_offset": 0, 00:23:45.880 "data_size": 65536 00:23:45.880 }, 00:23:45.880 { 00:23:45.880 "name": null, 00:23:45.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:45.880 "is_configured": false, 00:23:45.880 "data_offset": 0, 00:23:45.880 "data_size": 65536 00:23:45.880 }, 00:23:45.880 { 00:23:45.880 "name": "BaseBdev3", 00:23:45.880 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:45.880 "is_configured": true, 00:23:45.880 "data_offset": 0, 00:23:45.880 "data_size": 65536 00:23:45.880 }, 00:23:45.880 { 00:23:45.880 "name": "BaseBdev4", 00:23:45.880 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:45.880 "is_configured": true, 00:23:45.880 "data_offset": 0, 00:23:45.881 "data_size": 65536 00:23:45.881 } 00:23:45.881 ] 00:23:45.881 }' 00:23:45.881 00:43:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:45.881 00:43:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:45.881 00:43:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:45.881 00:43:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:45.881 00:43:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:46.139 [2024-04-27 00:43:19.674879] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:46.398 [2024-04-27 00:43:19.774902] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:46.398 [2024-04-27 00:43:19.784441] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:46.966 "name": "raid_bdev1", 00:23:46.966 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:46.966 "strip_size_kb": 0, 00:23:46.966 "state": "online", 00:23:46.966 "raid_level": "raid1", 00:23:46.966 "superblock": false, 00:23:46.966 "num_base_bdevs": 4, 00:23:46.966 "num_base_bdevs_discovered": 3, 00:23:46.966 "num_base_bdevs_operational": 3, 00:23:46.966 "base_bdevs_list": [ 00:23:46.966 { 00:23:46.966 "name": "spare", 00:23:46.966 "uuid": "f2e3c959-d407-55be-8b8b-970ef3a7f691", 00:23:46.966 "is_configured": true, 00:23:46.966 "data_offset": 0, 00:23:46.966 "data_size": 65536 00:23:46.966 }, 00:23:46.966 { 00:23:46.966 "name": null, 00:23:46.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.966 "is_configured": false, 00:23:46.966 "data_offset": 0, 00:23:46.966 "data_size": 65536 00:23:46.966 }, 00:23:46.966 { 00:23:46.966 "name": "BaseBdev3", 00:23:46.966 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:46.966 "is_configured": true, 
00:23:46.966 "data_offset": 0, 00:23:46.966 "data_size": 65536 00:23:46.966 }, 00:23:46.966 { 00:23:46.966 "name": "BaseBdev4", 00:23:46.966 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:46.966 "is_configured": true, 00:23:46.966 "data_offset": 0, 00:23:46.966 "data_size": 65536 00:23:46.966 } 00:23:46.966 ] 00:23:46.966 }' 00:23:46.966 00:43:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@660 -- # break 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.225 00:43:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.483 00:43:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:47.483 "name": "raid_bdev1", 00:23:47.483 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:47.483 "strip_size_kb": 0, 00:23:47.483 "state": "online", 00:23:47.483 "raid_level": "raid1", 00:23:47.483 "superblock": false, 00:23:47.483 "num_base_bdevs": 4, 00:23:47.483 "num_base_bdevs_discovered": 3, 00:23:47.483 "num_base_bdevs_operational": 3, 00:23:47.483 "base_bdevs_list": [ 00:23:47.483 { 00:23:47.483 "name": "spare", 00:23:47.483 "uuid": "f2e3c959-d407-55be-8b8b-970ef3a7f691", 00:23:47.483 "is_configured": true, 00:23:47.483 "data_offset": 0, 00:23:47.483 "data_size": 65536 00:23:47.483 }, 00:23:47.483 { 00:23:47.483 "name": null, 00:23:47.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.483 "is_configured": false, 00:23:47.483 "data_offset": 0, 00:23:47.483 "data_size": 65536 00:23:47.483 }, 00:23:47.483 { 00:23:47.483 "name": "BaseBdev3", 00:23:47.483 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:47.483 "is_configured": true, 00:23:47.483 "data_offset": 0, 00:23:47.483 "data_size": 65536 00:23:47.483 }, 00:23:47.483 { 00:23:47.483 "name": "BaseBdev4", 00:23:47.483 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:47.483 "is_configured": true, 00:23:47.483 "data_offset": 0, 00:23:47.483 "data_size": 65536 00:23:47.483 } 00:23:47.483 ] 00:23:47.483 }' 00:23:47.483 00:43:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:47.483 00:43:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:47.483 00:43:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.483 00:43:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.740 00:43:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.740 "name": "raid_bdev1", 00:23:47.740 "uuid": "cc621877-b65e-4b12-9f3d-a0bfb220814e", 00:23:47.740 "strip_size_kb": 0, 00:23:47.740 "state": "online", 00:23:47.740 "raid_level": "raid1", 00:23:47.740 "superblock": false, 00:23:47.740 "num_base_bdevs": 4, 00:23:47.740 "num_base_bdevs_discovered": 3, 00:23:47.741 "num_base_bdevs_operational": 3, 00:23:47.741 "base_bdevs_list": [ 00:23:47.741 { 00:23:47.741 "name": "spare", 00:23:47.741 "uuid": "f2e3c959-d407-55be-8b8b-970ef3a7f691", 00:23:47.741 "is_configured": true, 00:23:47.741 "data_offset": 0, 00:23:47.741 "data_size": 65536 00:23:47.741 }, 00:23:47.741 { 00:23:47.741 "name": null, 00:23:47.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.741 "is_configured": false, 00:23:47.741 "data_offset": 0, 00:23:47.741 "data_size": 65536 00:23:47.741 }, 00:23:47.741 { 00:23:47.741 "name": "BaseBdev3", 00:23:47.741 "uuid": "aa259f9c-0d07-4304-9bb5-56aa733fb23b", 00:23:47.741 "is_configured": true, 00:23:47.741 "data_offset": 0, 00:23:47.741 "data_size": 65536 00:23:47.741 }, 00:23:47.741 { 00:23:47.741 "name": "BaseBdev4", 00:23:47.741 "uuid": "923b92d4-35c4-46a6-b14e-a685855e742c", 00:23:47.741 "is_configured": true, 00:23:47.741 "data_offset": 0, 00:23:47.741 "data_size": 65536 00:23:47.741 } 00:23:47.741 ] 00:23:47.741 }' 00:23:47.741 00:43:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.741 00:43:21 -- common/autotest_common.sh@10 -- # set +x 00:23:48.676 00:43:21 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:48.676 [2024-04-27 00:43:22.209125] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:48.676 [2024-04-27 00:43:22.209170] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:48.934 00:23:48.934 Latency(us) 00:23:48.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.934 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:48.934 raid_bdev1 : 11.86 106.81 320.42 0.00 0.00 13240.51 271.83 117726.49 00:23:48.934 =================================================================================================================== 00:23:48.934 Total : 106.81 320.42 0.00 0.00 13240.51 271.83 117726.49 00:23:48.934 [2024-04-27 00:43:22.315319] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.934 [2024-04-27 00:43:22.315386] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.934 [2024-04-27 00:43:22.315473] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.934 [2024-04-27 00:43:22.315484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:23:48.935 0 00:23:48.935 
00:43:22 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.935 00:43:22 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:49.193 00:43:22 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:49.193 00:43:22 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:49.193 00:43:22 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@12 -- # local i 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:49.193 00:43:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:49.451 /dev/nbd0 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:49.451 00:43:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:49.451 00:43:22 -- common/autotest_common.sh@855 -- # local i 00:23:49.451 00:43:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:49.451 00:43:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:49.451 00:43:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:49.451 00:43:22 -- common/autotest_common.sh@859 -- # break 00:23:49.451 00:43:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:49.451 00:43:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:49.451 00:43:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:49.451 1+0 records in 00:23:49.451 1+0 records out 00:23:49.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430317 s, 9.5 MB/s 00:23:49.451 00:43:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:49.451 00:43:22 -- common/autotest_common.sh@872 -- # size=4096 00:23:49.451 00:43:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:49.451 00:43:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:49.451 00:43:22 -- common/autotest_common.sh@875 -- # return 0 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:49.451 00:43:22 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:49.451 00:43:22 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:49.451 00:43:22 -- bdev/bdev_raid.sh@678 -- # continue 00:23:49.451 00:43:22 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:49.451 00:43:22 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:49.451 00:43:22 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@12 -- # local i 00:23:49.451 00:43:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:49.452 00:43:22 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:49.452 00:43:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:49.710 /dev/nbd1 00:23:49.710 00:43:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:49.710 00:43:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:49.710 00:43:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:49.710 00:43:23 -- common/autotest_common.sh@855 -- # local i 00:23:49.710 00:43:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:49.710 00:43:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:49.710 00:43:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:49.710 00:43:23 -- common/autotest_common.sh@859 -- # break 00:23:49.710 00:43:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:49.710 00:43:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:49.710 00:43:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:49.710 1+0 records in 00:23:49.710 1+0 records out 00:23:49.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286154 s, 14.3 MB/s 00:23:49.710 00:43:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:49.710 00:43:23 -- common/autotest_common.sh@872 -- # size=4096 00:23:49.710 00:43:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:49.710 00:43:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:49.710 00:43:23 -- common/autotest_common.sh@875 -- # return 0 00:23:49.710 00:43:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:49.710 00:43:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:49.710 00:43:23 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:49.969 00:43:23 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:49.969 00:43:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:49.969 00:43:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:49.969 00:43:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:49.969 00:43:23 -- bdev/nbd_common.sh@51 -- # local i 00:23:49.969 00:43:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:49.969 00:43:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@41 -- # break 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@45 -- # return 0 00:23:50.253 00:43:23 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:50.253 00:43:23 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:50.253 00:43:23 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@12 -- # local i 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:50.253 00:43:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:50.512 /dev/nbd1 00:23:50.512 00:43:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:50.512 00:43:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:50.512 00:43:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:50.512 00:43:23 -- common/autotest_common.sh@855 -- # local i 00:23:50.512 00:43:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:50.512 00:43:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:50.512 00:43:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:50.512 00:43:23 -- common/autotest_common.sh@859 -- # break 00:23:50.512 00:43:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:50.512 00:43:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:50.512 00:43:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:50.512 1+0 records in 00:23:50.512 1+0 records out 00:23:50.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370752 s, 11.0 MB/s 00:23:50.513 00:43:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:50.513 00:43:23 -- common/autotest_common.sh@872 -- # size=4096 00:23:50.513 00:43:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:50.513 00:43:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:50.513 00:43:23 -- common/autotest_common.sh@875 -- # return 0 00:23:50.513 00:43:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:50.513 00:43:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:50.513 00:43:23 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:50.513 00:43:23 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:50.513 00:43:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.513 00:43:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:50.513 00:43:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:50.513 00:43:23 -- bdev/nbd_common.sh@51 -- # local i 00:23:50.513 00:43:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:50.513 00:43:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:50.772 00:43:24 -- 
bdev/nbd_common.sh@41 -- # break 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@45 -- # return 0 00:23:50.772 00:43:24 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@51 -- # local i 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:50.772 00:43:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:51.031 00:43:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:51.031 00:43:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:51.031 00:43:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:51.031 00:43:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:51.031 00:43:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:51.031 00:43:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:51.031 00:43:24 -- bdev/nbd_common.sh@41 -- # break 00:23:51.031 00:43:24 -- bdev/nbd_common.sh@45 -- # return 0 00:23:51.031 00:43:24 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:51.031 00:43:24 -- bdev/bdev_raid.sh@709 -- # killprocess 133630 00:23:51.031 00:43:24 -- common/autotest_common.sh@936 -- # '[' -z 133630 ']' 00:23:51.031 00:43:24 -- common/autotest_common.sh@940 -- # kill -0 133630 00:23:51.031 00:43:24 -- common/autotest_common.sh@941 -- # uname 00:23:51.031 00:43:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:51.031 00:43:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 133630 00:23:51.031 00:43:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:51.031 00:43:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:51.031 00:43:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 133630' 00:23:51.031 killing process with pid 133630 00:23:51.031 00:43:24 -- common/autotest_common.sh@955 -- # kill 133630 00:23:51.031 Received shutdown signal, test time was about 14.111710 seconds 00:23:51.031 00:23:51.031 Latency(us) 00:23:51.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.031 =================================================================================================================== 00:23:51.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:51.031 [2024-04-27 00:43:24.546185] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:51.031 00:43:24 -- common/autotest_common.sh@960 -- # wait 133630 00:23:51.290 [2024-04-27 00:43:24.868406] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:52.694 ************************************ 00:23:52.694 END TEST raid_rebuild_test_io 00:23:52.694 ************************************ 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:52.694 00:23:52.694 real 0m19.801s 00:23:52.694 user 0m31.019s 00:23:52.694 sys 0m2.263s 00:23:52.694 00:43:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:52.694 00:43:25 -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:23:52.694 00:43:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:52.694 00:43:25 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:23:52.694 00:43:25 -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 ************************************ 00:23:52.694 START TEST raid_rebuild_test_sb_io 00:23:52.694 ************************************ 00:23:52.694 00:43:25 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid1 4 true true 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@544 -- # raid_pid=134160 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:52.694 00:43:25 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134160 /var/tmp/spdk-raid.sock 00:23:52.694 00:43:25 -- common/autotest_common.sh@817 -- # '[' -z 134160 ']' 00:23:52.694 00:43:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:52.694 00:43:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:52.694 00:43:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:52.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
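The waitforlisten step above blocks until the freshly launched bdevperf process answers RPCs on /var/tmp/spdk-raid.sock. A minimal sketch of that wait pattern (not the exact autotest_common.sh implementation; assumes the spdk repo root as working directory and that the bdevperf PID was captured at launch):

  rpc_sock=/var/tmp/spdk-raid.sock
  pid=$bdevperf_pid                # assumption: saved when bdevperf was started
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "process died before listening"; exit 1; }
      # rpc_get_methods is a cheap built-in RPC; any reply means the socket is live
      scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done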
00:23:52.694 00:43:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:52.694 00:43:25 -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 [2024-04-27 00:43:26.051766] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:23:52.694 [2024-04-27 00:43:26.051962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134160 ] 00:23:52.694 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:52.694 Zero copy mechanism will not be used. 00:23:52.694 [2024-04-27 00:43:26.205983] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.953 [2024-04-27 00:43:26.398198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.212 [2024-04-27 00:43:26.570401] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:53.471 00:43:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:53.471 00:43:27 -- common/autotest_common.sh@850 -- # return 0 00:23:53.471 00:43:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:53.471 00:43:27 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:53.471 00:43:27 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:53.730 BaseBdev1_malloc 00:23:53.730 00:43:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:53.991 [2024-04-27 00:43:27.494015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:53.991 [2024-04-27 00:43:27.494157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.991 [2024-04-27 00:43:27.494218] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:53.991 [2024-04-27 00:43:27.494263] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.991 [2024-04-27 00:43:27.496878] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.991 [2024-04-27 00:43:27.496944] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:53.991 BaseBdev1 00:23:53.991 00:43:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:53.991 00:43:27 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:53.991 00:43:27 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:54.250 BaseBdev2_malloc 00:23:54.250 00:43:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:54.509 [2024-04-27 00:43:28.017082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:54.509 [2024-04-27 00:43:28.017198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.509 [2024-04-27 00:43:28.017245] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:54.509 [2024-04-27 00:43:28.017296] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.509 [2024-04-27 00:43:28.019769] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:54.509 [2024-04-27 00:43:28.019834] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:54.509 BaseBdev2 00:23:54.509 00:43:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:54.509 00:43:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:54.509 00:43:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:54.767 BaseBdev3_malloc 00:23:54.767 00:43:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:55.026 [2024-04-27 00:43:28.476402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:55.026 [2024-04-27 00:43:28.476516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.026 [2024-04-27 00:43:28.476560] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:55.026 [2024-04-27 00:43:28.476603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.026 [2024-04-27 00:43:28.479050] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.026 [2024-04-27 00:43:28.479135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:55.026 BaseBdev3 00:23:55.026 00:43:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:55.026 00:43:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:55.026 00:43:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:55.284 BaseBdev4_malloc 00:23:55.284 00:43:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:55.542 [2024-04-27 00:43:28.939555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:55.542 [2024-04-27 00:43:28.939663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.542 [2024-04-27 00:43:28.939702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:55.542 [2024-04-27 00:43:28.939753] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.542 [2024-04-27 00:43:28.942095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.542 [2024-04-27 00:43:28.942178] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:55.542 BaseBdev4 00:23:55.542 00:43:28 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:55.801 spare_malloc 00:23:55.801 00:43:29 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:56.059 spare_delay 00:23:56.059 00:43:29 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:56.059 [2024-04-27 00:43:29.615442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:56.059 [2024-04-27 00:43:29.615560] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.059 [2024-04-27 00:43:29.615594] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:56.060 [2024-04-27 00:43:29.615639] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.060 [2024-04-27 00:43:29.618196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.060 [2024-04-27 00:43:29.618290] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:56.060 spare 00:23:56.060 00:43:29 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:56.318 [2024-04-27 00:43:29.867660] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:56.318 [2024-04-27 00:43:29.869946] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.318 [2024-04-27 00:43:29.870057] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:56.318 [2024-04-27 00:43:29.870121] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:56.318 [2024-04-27 00:43:29.870384] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:23:56.318 [2024-04-27 00:43:29.870408] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:56.318 [2024-04-27 00:43:29.870558] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:56.318 [2024-04-27 00:43:29.870980] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:23:56.318 [2024-04-27 00:43:29.871006] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:23:56.318 [2024-04-27 00:43:29.871186] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.318 00:43:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.577 00:43:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:56.578 "name": "raid_bdev1", 00:23:56.578 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:23:56.578 "strip_size_kb": 0, 00:23:56.578 "state": "online", 00:23:56.578 "raid_level": "raid1", 00:23:56.578 "superblock": true, 00:23:56.578 "num_base_bdevs": 4, 00:23:56.578 "num_base_bdevs_discovered": 4, 00:23:56.578 "num_base_bdevs_operational": 4, 00:23:56.578 "base_bdevs_list": [ 
00:23:56.578 { 00:23:56.578 "name": "BaseBdev1", 00:23:56.578 "uuid": "8b869536-f377-5a3c-841a-5ab7b786a66a", 00:23:56.578 "is_configured": true, 00:23:56.578 "data_offset": 2048, 00:23:56.578 "data_size": 63488 00:23:56.578 }, 00:23:56.578 { 00:23:56.578 "name": "BaseBdev2", 00:23:56.578 "uuid": "d25d060d-6a29-5822-a064-13f5bc352bf8", 00:23:56.578 "is_configured": true, 00:23:56.578 "data_offset": 2048, 00:23:56.578 "data_size": 63488 00:23:56.578 }, 00:23:56.578 { 00:23:56.578 "name": "BaseBdev3", 00:23:56.578 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:23:56.578 "is_configured": true, 00:23:56.578 "data_offset": 2048, 00:23:56.578 "data_size": 63488 00:23:56.578 }, 00:23:56.578 { 00:23:56.578 "name": "BaseBdev4", 00:23:56.578 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:23:56.578 "is_configured": true, 00:23:56.578 "data_offset": 2048, 00:23:56.578 "data_size": 63488 00:23:56.578 } 00:23:56.578 ] 00:23:56.578 }' 00:23:56.578 00:43:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:56.578 00:43:30 -- common/autotest_common.sh@10 -- # set +x 00:23:57.512 00:43:30 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:57.512 00:43:30 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:57.512 [2024-04-27 00:43:30.980102] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.512 00:43:30 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:57.512 00:43:30 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.512 00:43:30 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:57.770 00:43:31 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:57.770 00:43:31 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:57.770 00:43:31 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:57.770 00:43:31 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:58.029 [2024-04-27 00:43:31.371542] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:58.029 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:58.029 Zero copy mechanism will not be used. 00:23:58.029 Running I/O for 60 seconds... 
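The two commands traced just above drive the degraded-mode phase of this test: background random read/write I/O is started through the bdevperf helper, then BaseBdev1 is hot-removed so the raid1 volume keeps serving I/O on 3 of its 4 members. Condensed as a standalone sketch (same commands as in the trace; perform_tests is backgrounded per the script flow):

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests &     # 60 s of randrw background I/O
  scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_remove_base_bdev BaseBdev1           # drop one member mid-I/O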
00:23:58.029 [2024-04-27 00:43:31.516233] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:58.029 [2024-04-27 00:43:31.522454] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.029 00:43:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.287 00:43:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:58.287 "name": "raid_bdev1", 00:23:58.287 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:23:58.287 "strip_size_kb": 0, 00:23:58.287 "state": "online", 00:23:58.287 "raid_level": "raid1", 00:23:58.287 "superblock": true, 00:23:58.287 "num_base_bdevs": 4, 00:23:58.287 "num_base_bdevs_discovered": 3, 00:23:58.287 "num_base_bdevs_operational": 3, 00:23:58.287 "base_bdevs_list": [ 00:23:58.287 { 00:23:58.287 "name": null, 00:23:58.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.287 "is_configured": false, 00:23:58.287 "data_offset": 2048, 00:23:58.287 "data_size": 63488 00:23:58.287 }, 00:23:58.287 { 00:23:58.287 "name": "BaseBdev2", 00:23:58.287 "uuid": "d25d060d-6a29-5822-a064-13f5bc352bf8", 00:23:58.287 "is_configured": true, 00:23:58.287 "data_offset": 2048, 00:23:58.287 "data_size": 63488 00:23:58.287 }, 00:23:58.287 { 00:23:58.287 "name": "BaseBdev3", 00:23:58.287 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:23:58.287 "is_configured": true, 00:23:58.287 "data_offset": 2048, 00:23:58.287 "data_size": 63488 00:23:58.287 }, 00:23:58.287 { 00:23:58.287 "name": "BaseBdev4", 00:23:58.287 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:23:58.287 "is_configured": true, 00:23:58.287 "data_offset": 2048, 00:23:58.287 "data_size": 63488 00:23:58.287 } 00:23:58.287 ] 00:23:58.287 }' 00:23:58.287 00:43:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:58.288 00:43:31 -- common/autotest_common.sh@10 -- # set +x 00:23:58.879 00:43:32 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:59.138 [2024-04-27 00:43:32.692837] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:59.138 [2024-04-27 00:43:32.692914] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:59.397 00:43:32 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:59.397 [2024-04-27 00:43:32.753284] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:59.397 [2024-04-27 00:43:32.755454] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:59.397 
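Once bdev_raid_add_base_bdev attaches the spare, the raid module starts the rebuild announced above, and its progress is reported under .process in the bdev_raid_get_bdevs output. An illustrative watch loop (an assumed helper, not part of the test script) built on the same RPC + jq pattern the trace uses:

  while :; do
      p=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
          | jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // "done"')
      # .process disappears once the rebuild finishes; empty means the bdev is gone
      case $p in ""|done) break ;; esac
      echo "rebuild at ${p}%"
      sleep 1
  done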
[2024-04-27 00:43:32.879458] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:59.656 [2024-04-27 00:43:33.113712] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:59.915 [2024-04-27 00:43:33.363220] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:00.174 [2024-04-27 00:43:33.503266] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:00.174 00:43:33 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.174 00:43:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:00.174 00:43:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:00.174 00:43:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:00.174 00:43:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:00.174 00:43:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.174 00:43:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.432 [2024-04-27 00:43:33.855647] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:00.432 [2024-04-27 00:43:33.856722] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:24:00.432 00:43:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:00.432 "name": "raid_bdev1", 00:24:00.432 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:00.432 "strip_size_kb": 0, 00:24:00.432 "state": "online", 00:24:00.432 "raid_level": "raid1", 00:24:00.432 "superblock": true, 00:24:00.432 "num_base_bdevs": 4, 00:24:00.432 "num_base_bdevs_discovered": 4, 00:24:00.432 "num_base_bdevs_operational": 4, 00:24:00.432 "process": { 00:24:00.432 "type": "rebuild", 00:24:00.432 "target": "spare", 00:24:00.432 "progress": { 00:24:00.432 "blocks": 14336, 00:24:00.432 "percent": 22 00:24:00.432 } 00:24:00.432 }, 00:24:00.432 "base_bdevs_list": [ 00:24:00.432 { 00:24:00.432 "name": "spare", 00:24:00.432 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:00.432 "is_configured": true, 00:24:00.432 "data_offset": 2048, 00:24:00.432 "data_size": 63488 00:24:00.432 }, 00:24:00.432 { 00:24:00.432 "name": "BaseBdev2", 00:24:00.432 "uuid": "d25d060d-6a29-5822-a064-13f5bc352bf8", 00:24:00.432 "is_configured": true, 00:24:00.432 "data_offset": 2048, 00:24:00.432 "data_size": 63488 00:24:00.432 }, 00:24:00.432 { 00:24:00.432 "name": "BaseBdev3", 00:24:00.432 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:00.432 "is_configured": true, 00:24:00.432 "data_offset": 2048, 00:24:00.432 "data_size": 63488 00:24:00.432 }, 00:24:00.432 { 00:24:00.432 "name": "BaseBdev4", 00:24:00.432 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:00.432 "is_configured": true, 00:24:00.432 "data_offset": 2048, 00:24:00.432 "data_size": 63488 00:24:00.432 } 00:24:00.432 ] 00:24:00.432 }' 00:24:00.691 00:43:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:00.691 [2024-04-27 00:43:34.068882] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:00.691 00:43:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.691 
00:43:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:00.691 00:43:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.691 00:43:34 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:00.950 [2024-04-27 00:43:34.351244] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:00.950 [2024-04-27 00:43:34.529889] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:00.950 [2024-04-27 00:43:34.533480] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:01.208 [2024-04-27 00:43:34.554082] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.208 00:43:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.466 00:43:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:01.466 "name": "raid_bdev1", 00:24:01.466 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:01.466 "strip_size_kb": 0, 00:24:01.466 "state": "online", 00:24:01.466 "raid_level": "raid1", 00:24:01.466 "superblock": true, 00:24:01.466 "num_base_bdevs": 4, 00:24:01.466 "num_base_bdevs_discovered": 3, 00:24:01.466 "num_base_bdevs_operational": 3, 00:24:01.466 "base_bdevs_list": [ 00:24:01.466 { 00:24:01.466 "name": null, 00:24:01.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.466 "is_configured": false, 00:24:01.466 "data_offset": 2048, 00:24:01.466 "data_size": 63488 00:24:01.466 }, 00:24:01.466 { 00:24:01.466 "name": "BaseBdev2", 00:24:01.466 "uuid": "d25d060d-6a29-5822-a064-13f5bc352bf8", 00:24:01.466 "is_configured": true, 00:24:01.466 "data_offset": 2048, 00:24:01.466 "data_size": 63488 00:24:01.466 }, 00:24:01.466 { 00:24:01.466 "name": "BaseBdev3", 00:24:01.466 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:01.466 "is_configured": true, 00:24:01.466 "data_offset": 2048, 00:24:01.466 "data_size": 63488 00:24:01.466 }, 00:24:01.466 { 00:24:01.466 "name": "BaseBdev4", 00:24:01.466 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:01.466 "is_configured": true, 00:24:01.466 "data_offset": 2048, 00:24:01.466 "data_size": 63488 00:24:01.466 } 00:24:01.466 ] 00:24:01.466 }' 00:24:01.466 00:43:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:01.467 00:43:34 -- common/autotest_common.sh@10 -- # set +x 00:24:02.035 00:43:35 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:02.035 00:43:35 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:24:02.035 00:43:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:02.035 00:43:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:02.035 00:43:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.035 00:43:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.035 00:43:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.294 00:43:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:02.294 "name": "raid_bdev1", 00:24:02.294 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:02.294 "strip_size_kb": 0, 00:24:02.294 "state": "online", 00:24:02.294 "raid_level": "raid1", 00:24:02.294 "superblock": true, 00:24:02.294 "num_base_bdevs": 4, 00:24:02.294 "num_base_bdevs_discovered": 3, 00:24:02.294 "num_base_bdevs_operational": 3, 00:24:02.294 "base_bdevs_list": [ 00:24:02.294 { 00:24:02.294 "name": null, 00:24:02.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:02.294 "is_configured": false, 00:24:02.294 "data_offset": 2048, 00:24:02.294 "data_size": 63488 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "name": "BaseBdev2", 00:24:02.294 "uuid": "d25d060d-6a29-5822-a064-13f5bc352bf8", 00:24:02.294 "is_configured": true, 00:24:02.294 "data_offset": 2048, 00:24:02.294 "data_size": 63488 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "name": "BaseBdev3", 00:24:02.294 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:02.294 "is_configured": true, 00:24:02.294 "data_offset": 2048, 00:24:02.294 "data_size": 63488 00:24:02.294 }, 00:24:02.294 { 00:24:02.294 "name": "BaseBdev4", 00:24:02.294 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:02.294 "is_configured": true, 00:24:02.294 "data_offset": 2048, 00:24:02.294 "data_size": 63488 00:24:02.294 } 00:24:02.294 ] 00:24:02.294 }' 00:24:02.294 00:43:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.294 00:43:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:02.294 00:43:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.294 00:43:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:02.294 00:43:35 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:02.553 [2024-04-27 00:43:36.067505] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:02.553 [2024-04-27 00:43:36.067562] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:02.553 00:43:36 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:02.553 [2024-04-27 00:43:36.120918] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:02.553 [2024-04-27 00:43:36.123173] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:02.812 [2024-04-27 00:43:36.249249] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:02.813 [2024-04-27 00:43:36.250527] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:24:03.072 [2024-04-27 00:43:36.501131] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:24:03.072 [2024-04-27 00:43:36.501788] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 
offset_end: 6144 00:24:03.331 [2024-04-27 00:43:36.839722] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:24:03.590 [2024-04-27 00:43:36.951364] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:24:03.590 00:43:37 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.590 00:43:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.590 00:43:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:03.590 00:43:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:03.590 00:43:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.590 00:43:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.590 00:43:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.849 [2024-04-27 00:43:37.275919] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:24:03.849 00:43:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:03.849 "name": "raid_bdev1", 00:24:03.849 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:03.849 "strip_size_kb": 0, 00:24:03.849 "state": "online", 00:24:03.849 "raid_level": "raid1", 00:24:03.849 "superblock": true, 00:24:03.849 "num_base_bdevs": 4, 00:24:03.849 "num_base_bdevs_discovered": 4, 00:24:03.849 "num_base_bdevs_operational": 4, 00:24:03.849 "process": { 00:24:03.849 "type": "rebuild", 00:24:03.849 "target": "spare", 00:24:03.849 "progress": { 00:24:03.849 "blocks": 16384, 00:24:03.849 "percent": 25 00:24:03.849 } 00:24:03.849 }, 00:24:03.849 "base_bdevs_list": [ 00:24:03.849 { 00:24:03.849 "name": "spare", 00:24:03.849 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:03.849 "is_configured": true, 00:24:03.849 "data_offset": 2048, 00:24:03.849 "data_size": 63488 00:24:03.849 }, 00:24:03.849 { 00:24:03.849 "name": "BaseBdev2", 00:24:03.849 "uuid": "d25d060d-6a29-5822-a064-13f5bc352bf8", 00:24:03.849 "is_configured": true, 00:24:03.849 "data_offset": 2048, 00:24:03.849 "data_size": 63488 00:24:03.849 }, 00:24:03.849 { 00:24:03.849 "name": "BaseBdev3", 00:24:03.849 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:03.849 "is_configured": true, 00:24:03.849 "data_offset": 2048, 00:24:03.849 "data_size": 63488 00:24:03.849 }, 00:24:03.849 { 00:24:03.849 "name": "BaseBdev4", 00:24:03.849 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:03.849 "is_configured": true, 00:24:03.849 "data_offset": 2048, 00:24:03.849 "data_size": 63488 00:24:03.849 } 00:24:03.849 ] 00:24:03.849 }' 00:24:03.849 00:43:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:03.849 00:43:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.849 00:43:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.108 00:43:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.108 00:43:37 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:04.108 00:43:37 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:04.108 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:04.108 00:43:37 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:04.108 00:43:37 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:24:04.108 00:43:37 -- bdev/bdev_raid.sh@644 -- # '[' 
4 -gt 2 ']' 00:24:04.108 00:43:37 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:04.367 [2024-04-27 00:43:37.720468] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:04.367 [2024-04-27 00:43:37.784034] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:04.367 [2024-04-27 00:43:37.895610] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:24:04.367 [2024-04-27 00:43:37.895647] bdev_raid.c:1964:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:24:04.367 [2024-04-27 00:43:37.904447] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.627 00:43:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.890 "name": "raid_bdev1", 00:24:04.890 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:04.890 "strip_size_kb": 0, 00:24:04.890 "state": "online", 00:24:04.890 "raid_level": "raid1", 00:24:04.890 "superblock": true, 00:24:04.890 "num_base_bdevs": 4, 00:24:04.890 "num_base_bdevs_discovered": 3, 00:24:04.890 "num_base_bdevs_operational": 3, 00:24:04.890 "process": { 00:24:04.890 "type": "rebuild", 00:24:04.890 "target": "spare", 00:24:04.890 "progress": { 00:24:04.890 "blocks": 24576, 00:24:04.890 "percent": 38 00:24:04.890 } 00:24:04.890 }, 00:24:04.890 "base_bdevs_list": [ 00:24:04.890 { 00:24:04.890 "name": "spare", 00:24:04.890 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:04.890 "is_configured": true, 00:24:04.890 "data_offset": 2048, 00:24:04.890 "data_size": 63488 00:24:04.890 }, 00:24:04.890 { 00:24:04.890 "name": null, 00:24:04.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.890 "is_configured": false, 00:24:04.890 "data_offset": 2048, 00:24:04.890 "data_size": 63488 00:24:04.890 }, 00:24:04.890 { 00:24:04.890 "name": "BaseBdev3", 00:24:04.890 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:04.890 "is_configured": true, 00:24:04.890 "data_offset": 2048, 00:24:04.890 "data_size": 63488 00:24:04.890 }, 00:24:04.890 { 00:24:04.890 "name": "BaseBdev4", 00:24:04.890 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:04.890 "is_configured": true, 00:24:04.890 "data_offset": 2048, 00:24:04.890 "data_size": 63488 00:24:04.890 } 00:24:04.890 ] 00:24:04.890 }' 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.890 
[2024-04-27 00:43:38.377493] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@657 -- # local timeout=568 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.890 00:43:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.158 00:43:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.158 "name": "raid_bdev1", 00:24:05.158 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:05.158 "strip_size_kb": 0, 00:24:05.158 "state": "online", 00:24:05.158 "raid_level": "raid1", 00:24:05.158 "superblock": true, 00:24:05.158 "num_base_bdevs": 4, 00:24:05.158 "num_base_bdevs_discovered": 3, 00:24:05.158 "num_base_bdevs_operational": 3, 00:24:05.158 "process": { 00:24:05.158 "type": "rebuild", 00:24:05.158 "target": "spare", 00:24:05.158 "progress": { 00:24:05.158 "blocks": 30720, 00:24:05.158 "percent": 48 00:24:05.158 } 00:24:05.158 }, 00:24:05.158 "base_bdevs_list": [ 00:24:05.158 { 00:24:05.158 "name": "spare", 00:24:05.158 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:05.158 "is_configured": true, 00:24:05.158 "data_offset": 2048, 00:24:05.158 "data_size": 63488 00:24:05.158 }, 00:24:05.158 { 00:24:05.158 "name": null, 00:24:05.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.158 "is_configured": false, 00:24:05.158 "data_offset": 2048, 00:24:05.158 "data_size": 63488 00:24:05.158 }, 00:24:05.158 { 00:24:05.158 "name": "BaseBdev3", 00:24:05.158 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:05.158 "is_configured": true, 00:24:05.158 "data_offset": 2048, 00:24:05.158 "data_size": 63488 00:24:05.158 }, 00:24:05.158 { 00:24:05.158 "name": "BaseBdev4", 00:24:05.158 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:05.158 "is_configured": true, 00:24:05.158 "data_offset": 2048, 00:24:05.158 "data_size": 63488 00:24:05.158 } 00:24:05.158 ] 00:24:05.158 }' 00:24:05.158 00:43:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.158 00:43:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.158 00:43:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.158 [2024-04-27 00:43:38.737242] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:24:05.417 00:43:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.417 00:43:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:05.417 [2024-04-27 00:43:38.957517] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:24:05.984 [2024-04-27 00:43:39.497534] bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:24:06.243 [2024-04-27 00:43:39.627294] 
bdev_raid.c: 853:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:24:06.243 00:43:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:06.243 00:43:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:06.243 00:43:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:06.243 00:43:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:06.243 00:43:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:06.243 00:43:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:06.243 00:43:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.243 00:43:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.501 00:43:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:06.501 "name": "raid_bdev1", 00:24:06.501 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:06.501 "strip_size_kb": 0, 00:24:06.501 "state": "online", 00:24:06.501 "raid_level": "raid1", 00:24:06.501 "superblock": true, 00:24:06.501 "num_base_bdevs": 4, 00:24:06.501 "num_base_bdevs_discovered": 3, 00:24:06.501 "num_base_bdevs_operational": 3, 00:24:06.501 "process": { 00:24:06.501 "type": "rebuild", 00:24:06.501 "target": "spare", 00:24:06.501 "progress": { 00:24:06.501 "blocks": 49152, 00:24:06.501 "percent": 77 00:24:06.501 } 00:24:06.501 }, 00:24:06.501 "base_bdevs_list": [ 00:24:06.501 { 00:24:06.501 "name": "spare", 00:24:06.501 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:06.501 "is_configured": true, 00:24:06.501 "data_offset": 2048, 00:24:06.501 "data_size": 63488 00:24:06.501 }, 00:24:06.501 { 00:24:06.501 "name": null, 00:24:06.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.501 "is_configured": false, 00:24:06.501 "data_offset": 2048, 00:24:06.501 "data_size": 63488 00:24:06.501 }, 00:24:06.501 { 00:24:06.501 "name": "BaseBdev3", 00:24:06.501 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:06.501 "is_configured": true, 00:24:06.502 "data_offset": 2048, 00:24:06.502 "data_size": 63488 00:24:06.502 }, 00:24:06.502 { 00:24:06.502 "name": "BaseBdev4", 00:24:06.502 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:06.502 "is_configured": true, 00:24:06.502 "data_offset": 2048, 00:24:06.502 "data_size": 63488 00:24:06.502 } 00:24:06.502 ] 00:24:06.502 }' 00:24:06.502 00:43:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:06.502 00:43:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.502 00:43:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:06.760 00:43:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.760 00:43:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:07.327 [2024-04-27 00:43:40.618634] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:07.327 [2024-04-27 00:43:40.725760] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:07.327 [2024-04-27 00:43:40.729790] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.586 00:43:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:07.586 00:43:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.586 00:43:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:07.586 00:43:41 -- bdev/bdev_raid.sh@184 -- # local 
process_type=rebuild 00:24:07.586 00:43:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:07.586 00:43:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:07.586 00:43:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.586 00:43:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.845 00:43:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.845 "name": "raid_bdev1", 00:24:07.845 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:07.845 "strip_size_kb": 0, 00:24:07.845 "state": "online", 00:24:07.845 "raid_level": "raid1", 00:24:07.845 "superblock": true, 00:24:07.845 "num_base_bdevs": 4, 00:24:07.845 "num_base_bdevs_discovered": 3, 00:24:07.845 "num_base_bdevs_operational": 3, 00:24:07.845 "base_bdevs_list": [ 00:24:07.845 { 00:24:07.845 "name": "spare", 00:24:07.845 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:07.845 "is_configured": true, 00:24:07.845 "data_offset": 2048, 00:24:07.845 "data_size": 63488 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "name": null, 00:24:07.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.845 "is_configured": false, 00:24:07.845 "data_offset": 2048, 00:24:07.845 "data_size": 63488 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "name": "BaseBdev3", 00:24:07.845 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:07.845 "is_configured": true, 00:24:07.845 "data_offset": 2048, 00:24:07.845 "data_size": 63488 00:24:07.845 }, 00:24:07.845 { 00:24:07.845 "name": "BaseBdev4", 00:24:07.845 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:07.845 "is_configured": true, 00:24:07.845 "data_offset": 2048, 00:24:07.845 "data_size": 63488 00:24:07.845 } 00:24:07.845 ] 00:24:07.845 }' 00:24:07.845 00:43:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.845 00:43:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:07.845 00:43:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@660 -- # break 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.103 00:43:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.362 00:43:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.362 "name": "raid_bdev1", 00:24:08.362 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:08.362 "strip_size_kb": 0, 00:24:08.362 "state": "online", 00:24:08.362 "raid_level": "raid1", 00:24:08.362 "superblock": true, 00:24:08.362 "num_base_bdevs": 4, 00:24:08.362 "num_base_bdevs_discovered": 3, 00:24:08.362 "num_base_bdevs_operational": 3, 00:24:08.362 "base_bdevs_list": [ 00:24:08.362 { 00:24:08.362 "name": "spare", 00:24:08.363 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:08.363 "is_configured": true, 00:24:08.363 "data_offset": 2048, 00:24:08.363 "data_size": 63488 00:24:08.363 }, 00:24:08.363 { 00:24:08.363 "name": null, 
00:24:08.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.363 "is_configured": false, 00:24:08.363 "data_offset": 2048, 00:24:08.363 "data_size": 63488 00:24:08.363 }, 00:24:08.363 { 00:24:08.363 "name": "BaseBdev3", 00:24:08.363 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:08.363 "is_configured": true, 00:24:08.363 "data_offset": 2048, 00:24:08.363 "data_size": 63488 00:24:08.363 }, 00:24:08.363 { 00:24:08.363 "name": "BaseBdev4", 00:24:08.363 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:08.363 "is_configured": true, 00:24:08.363 "data_offset": 2048, 00:24:08.363 "data_size": 63488 00:24:08.363 } 00:24:08.363 ] 00:24:08.363 }' 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.363 00:43:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.621 00:43:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:08.621 "name": "raid_bdev1", 00:24:08.621 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:08.621 "strip_size_kb": 0, 00:24:08.621 "state": "online", 00:24:08.621 "raid_level": "raid1", 00:24:08.621 "superblock": true, 00:24:08.621 "num_base_bdevs": 4, 00:24:08.621 "num_base_bdevs_discovered": 3, 00:24:08.621 "num_base_bdevs_operational": 3, 00:24:08.621 "base_bdevs_list": [ 00:24:08.621 { 00:24:08.621 "name": "spare", 00:24:08.621 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:08.621 "is_configured": true, 00:24:08.621 "data_offset": 2048, 00:24:08.621 "data_size": 63488 00:24:08.621 }, 00:24:08.621 { 00:24:08.621 "name": null, 00:24:08.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.621 "is_configured": false, 00:24:08.621 "data_offset": 2048, 00:24:08.621 "data_size": 63488 00:24:08.621 }, 00:24:08.621 { 00:24:08.621 "name": "BaseBdev3", 00:24:08.621 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:08.621 "is_configured": true, 00:24:08.621 "data_offset": 2048, 00:24:08.621 "data_size": 63488 00:24:08.621 }, 00:24:08.621 { 00:24:08.621 "name": "BaseBdev4", 00:24:08.621 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:08.621 "is_configured": true, 00:24:08.621 "data_offset": 2048, 00:24:08.621 "data_size": 63488 00:24:08.621 } 00:24:08.621 ] 00:24:08.621 }' 00:24:08.621 00:43:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:08.621 00:43:42 -- common/autotest_common.sh@10 -- # set +x 
00:24:09.188 00:43:42 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:09.446 [2024-04-27 00:43:42.964608] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:09.446 [2024-04-27 00:43:42.964677] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:09.447
00:24:09.447                              Latency(us)
00:24:09.447 Device Information : runtime(s)   IOPS      MiB/s     Fail/s    TO/s      Average       min           max
00:24:09.447 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:24:09.447 raid_bdev1         : 11.65        102.36    307.09    0.00      0.00      13918.13      277.41        118679.74
00:24:09.447 ===================================================================================================================
00:24:09.447 Total              :              102.36    307.09    0.00      0.00      13918.13      277.41        118679.74
00:24:09.705 [2024-04-27 00:43:43.047679] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:09.705 [2024-04-27 00:43:43.047760] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
0
00:24:09.705 [2024-04-27 00:43:43.047909] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:09.705 [2024-04-27 00:43:43.047929] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline
00:24:09.705 00:43:43 -- bdev/bdev_raid.sh@671 -- # jq length
00:24:09.963 00:43:43 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:09.963 00:43:43 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:24:09.963 00:43:43 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']'
00:24:09.963 00:43:43 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@12 -- # local i
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:24:09.963 00:43:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
00:24:10.221 /dev/nbd0
00:24:10.221 00:43:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:10.221 00:43:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:10.221 00:43:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:24:10.221 00:43:43 -- common/autotest_common.sh@855 -- # local i
00:24:10.221 00:43:43 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:24:10.221 00:43:43 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:24:10.221 00:43:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:24:10.221 00:43:43 -- common/autotest_common.sh@859 -- # break
00:24:10.221 00:43:43 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:24:10.221 00:43:43 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:24:10.221 00:43:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:10.221 1+0 records in 00:24:10.221 1+0 records out 00:24:10.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000856372 s, 4.8 MB/s 00:24:10.221 00:43:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.221 00:43:43 -- common/autotest_common.sh@872 -- # size=4096 00:24:10.221 00:43:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.221 00:43:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:10.221 00:43:43 -- common/autotest_common.sh@875 -- # return 0 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:10.221 00:43:43 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:10.221 00:43:43 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:24:10.221 00:43:43 -- bdev/bdev_raid.sh@678 -- # continue 00:24:10.221 00:43:43 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:10.221 00:43:43 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:24:10.221 00:43:43 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@12 -- # local i 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:10.221 00:43:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:24:10.480 /dev/nbd1 00:24:10.480 00:43:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:10.480 00:43:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:10.480 00:43:43 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:10.480 00:43:43 -- common/autotest_common.sh@855 -- # local i 00:24:10.480 00:43:43 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:10.480 00:43:43 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:10.480 00:43:43 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:10.480 00:43:43 -- common/autotest_common.sh@859 -- # break 00:24:10.480 00:43:43 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:10.480 00:43:43 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:10.480 00:43:43 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:10.480 1+0 records in 00:24:10.480 1+0 records out 00:24:10.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00097463 s, 4.2 MB/s 00:24:10.480 00:43:43 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.480 00:43:43 -- common/autotest_common.sh@872 -- # size=4096 00:24:10.480 00:43:43 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:10.480 00:43:43 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:10.480 00:43:43 -- common/autotest_common.sh@875 -- # return 0 00:24:10.480 00:43:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:10.480 00:43:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:10.480 
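
Each nbd_start_disk above is followed by the waitfornbd helper, whose xtrace lines (common/autotest_common.sh@854-@875) show two phases: wait for the device node to appear in /proc/partitions, then prove it serves reads with a single 4 KiB direct-I/O transfer. A sketch of that pattern, reconstructed from the trace (the retry sleep and the /tmp scratch path are assumptions; the real helper writes its scratch file under the SPDK test tree):

waitfornbd_sketch() {
    local nbd_name=$1 i size
    # Phase 1: the kernel lists the device in /proc/partitions once NBD is connected.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Phase 2: iflag=direct bypasses the page cache, so a successful
    # 4096-byte copy really did come from the NBD-backed bdev.
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}
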
00:43:43 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:10.738 00:43:44 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:10.738 00:43:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.738 00:43:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:10.738 00:43:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:10.738 00:43:44 -- bdev/nbd_common.sh@51 -- # local i 00:24:10.738 00:43:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:10.738 00:43:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:10.996 00:43:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@41 -- # break 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@45 -- # return 0 00:24:10.997 00:43:44 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:24:10.997 00:43:44 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:24:10.997 00:43:44 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@12 -- # local i 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:10.997 00:43:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:24:11.255 /dev/nbd1 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:11.255 00:43:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:11.255 00:43:44 -- common/autotest_common.sh@855 -- # local i 00:24:11.255 00:43:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:11.255 00:43:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:11.255 00:43:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:11.255 00:43:44 -- common/autotest_common.sh@859 -- # break 00:24:11.255 00:43:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:11.255 00:43:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:11.255 00:43:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:11.255 1+0 records in 00:24:11.255 1+0 records out 00:24:11.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619035 s, 6.6 MB/s 00:24:11.255 00:43:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:11.255 00:43:44 -- common/autotest_common.sh@872 -- # size=4096 00:24:11.255 00:43:44 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:11.255 00:43:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:11.255 00:43:44 -- common/autotest_common.sh@875 -- # return 0 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:11.255 00:43:44 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:11.255 00:43:44 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@51 -- # local i 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.255 00:43:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@41 -- # break 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.513 00:43:45 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@51 -- # local i 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:11.513 00:43:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:11.771 00:43:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:11.771 00:43:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:11.771 00:43:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:11.771 00:43:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:11.771 00:43:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:11.771 00:43:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:11.771 00:43:45 -- bdev/nbd_common.sh@41 -- # break 00:24:11.771 00:43:45 -- bdev/nbd_common.sh@45 -- # return 0 00:24:11.771 00:43:45 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:11.771 00:43:45 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:11.771 00:43:45 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:11.771 00:43:45 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:12.029 00:43:45 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:12.287 [2024-04-27 00:43:45.768524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:12.287 [2024-04-27 00:43:45.768667] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
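
The cmp invocations in this section are the actual rebuild check: the rebuilt spare, exported as /dev/nbd0, must match each surviving base bdev byte for byte. cmp -i 1048576 skips the first 1048576 bytes of both devices, which is exactly the superblock region in front of data_offset 2048 blocks (2048 * 512 = 1048576). A condensed sketch of one comparison round trip, using only the RPCs visible in the trace:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

rpc nbd_start_disk spare /dev/nbd0
rpc nbd_start_disk BaseBdev3 /dev/nbd1
cmp -i 1048576 /dev/nbd0 /dev/nbd1   # identical data regions -> exit 0
rpc nbd_stop_disk /dev/nbd1          # nbd1 is reused for BaseBdev4 next
rpc nbd_stop_disk /dev/nbd0

In the log, /dev/nbd0 stays connected while /dev/nbd1 is cycled through BaseBdev3 and then BaseBdev4, so the spare is compared against every remaining base bdev of the raid1 set.
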
00:24:12.287 [2024-04-27 00:43:45.768723] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:12.287 [2024-04-27 00:43:45.768759] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.287 [2024-04-27 00:43:45.771706] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.287 [2024-04-27 00:43:45.771792] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:12.287 [2024-04-27 00:43:45.771978] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:12.287 [2024-04-27 00:43:45.772040] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:12.287 BaseBdev1 00:24:12.287 00:43:45 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:12.287 00:43:45 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:24:12.287 00:43:45 -- bdev/bdev_raid.sh@696 -- # continue 00:24:12.287 00:43:45 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:12.287 00:43:45 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:12.287 00:43:45 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:12.546 00:43:45 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:12.804 [2024-04-27 00:43:46.232683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:12.804 [2024-04-27 00:43:46.232854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.804 [2024-04-27 00:43:46.232912] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:12.804 [2024-04-27 00:43:46.232947] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.804 [2024-04-27 00:43:46.233616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.804 [2024-04-27 00:43:46.233718] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.804 [2024-04-27 00:43:46.233907] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:12.804 [2024-04-27 00:43:46.233928] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:24:12.804 [2024-04-27 00:43:46.233936] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:12.804 [2024-04-27 00:43:46.233980] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:24:12.804 [2024-04-27 00:43:46.234065] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:12.804 BaseBdev3 00:24:12.804 00:43:46 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:12.804 00:43:46 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:24:12.804 00:43:46 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:24:13.063 00:43:46 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:13.322 [2024-04-27 00:43:46.700804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on BaseBdev4_malloc 00:24:13.322 [2024-04-27 00:43:46.700922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.322 [2024-04-27 00:43:46.700985] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:13.322 [2024-04-27 00:43:46.701073] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.322 [2024-04-27 00:43:46.701704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.322 [2024-04-27 00:43:46.701781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:13.323 [2024-04-27 00:43:46.701950] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:24:13.323 [2024-04-27 00:43:46.701997] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:13.323 BaseBdev4 00:24:13.323 00:43:46 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:13.581 00:43:46 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:13.581 [2024-04-27 00:43:47.109011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:13.581 [2024-04-27 00:43:47.109155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.581 [2024-04-27 00:43:47.109208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:24:13.581 [2024-04-27 00:43:47.109245] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.581 [2024-04-27 00:43:47.109998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.581 [2024-04-27 00:43:47.110121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:13.581 [2024-04-27 00:43:47.110284] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:13.581 [2024-04-27 00:43:47.110335] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:13.581 spare 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.581 00:43:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.839 [2024-04-27 00:43:47.210532] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:13.839 [2024-04-27 00:43:47.210568] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:13.839 [2024-04-27 
00:43:47.210797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000373d0 00:24:13.839 [2024-04-27 00:43:47.211319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:13.839 [2024-04-27 00:43:47.211346] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:24:13.839 [2024-04-27 00:43:47.211594] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.839 00:43:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:13.839 "name": "raid_bdev1", 00:24:13.839 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:13.839 "strip_size_kb": 0, 00:24:13.839 "state": "online", 00:24:13.839 "raid_level": "raid1", 00:24:13.839 "superblock": true, 00:24:13.839 "num_base_bdevs": 4, 00:24:13.839 "num_base_bdevs_discovered": 3, 00:24:13.839 "num_base_bdevs_operational": 3, 00:24:13.839 "base_bdevs_list": [ 00:24:13.839 { 00:24:13.839 "name": "spare", 00:24:13.839 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:13.839 "is_configured": true, 00:24:13.839 "data_offset": 2048, 00:24:13.839 "data_size": 63488 00:24:13.839 }, 00:24:13.839 { 00:24:13.839 "name": null, 00:24:13.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.839 "is_configured": false, 00:24:13.839 "data_offset": 2048, 00:24:13.839 "data_size": 63488 00:24:13.839 }, 00:24:13.839 { 00:24:13.839 "name": "BaseBdev3", 00:24:13.839 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:13.839 "is_configured": true, 00:24:13.839 "data_offset": 2048, 00:24:13.839 "data_size": 63488 00:24:13.839 }, 00:24:13.839 { 00:24:13.839 "name": "BaseBdev4", 00:24:13.839 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:13.840 "is_configured": true, 00:24:13.840 "data_offset": 2048, 00:24:13.840 "data_size": 63488 00:24:13.840 } 00:24:13.840 ] 00:24:13.840 }' 00:24:13.840 00:43:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:13.840 00:43:47 -- common/autotest_common.sh@10 -- # set +x 00:24:14.407 00:43:47 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.407 00:43:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:14.407 00:43:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:14.407 00:43:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:14.407 00:43:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:14.407 00:43:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.407 00:43:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.666 00:43:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:14.666 "name": "raid_bdev1", 00:24:14.666 "uuid": "d6667030-01e8-4b01-a387-ff1d48118f58", 00:24:14.666 "strip_size_kb": 0, 00:24:14.666 "state": "online", 00:24:14.666 "raid_level": "raid1", 00:24:14.666 "superblock": true, 00:24:14.666 "num_base_bdevs": 4, 00:24:14.666 "num_base_bdevs_discovered": 3, 00:24:14.666 "num_base_bdevs_operational": 3, 00:24:14.666 "base_bdevs_list": [ 00:24:14.666 { 00:24:14.666 "name": "spare", 00:24:14.666 "uuid": "50462cd7-c458-598b-85fd-a13a2020f803", 00:24:14.666 "is_configured": true, 00:24:14.666 "data_offset": 2048, 00:24:14.666 "data_size": 63488 00:24:14.666 }, 00:24:14.666 { 00:24:14.666 "name": null, 00:24:14.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.666 "is_configured": false, 00:24:14.666 "data_offset": 2048, 00:24:14.666 "data_size": 
63488 00:24:14.666 }, 00:24:14.666 { 00:24:14.666 "name": "BaseBdev3", 00:24:14.666 "uuid": "c93415f1-b9c7-51aa-872c-d81a6b0d19e5", 00:24:14.666 "is_configured": true, 00:24:14.666 "data_offset": 2048, 00:24:14.666 "data_size": 63488 00:24:14.666 }, 00:24:14.666 { 00:24:14.666 "name": "BaseBdev4", 00:24:14.666 "uuid": "32ef957a-00aa-5f0d-b79c-9ded9d588fbc", 00:24:14.666 "is_configured": true, 00:24:14.666 "data_offset": 2048, 00:24:14.666 "data_size": 63488 00:24:14.666 } 00:24:14.666 ] 00:24:14.666 }' 00:24:14.666 00:43:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:14.666 00:43:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:14.666 00:43:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:14.925 00:43:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:14.925 00:43:48 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.925 00:43:48 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:15.183 00:43:48 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.183 00:43:48 -- bdev/bdev_raid.sh@709 -- # killprocess 134160 00:24:15.183 00:43:48 -- common/autotest_common.sh@936 -- # '[' -z 134160 ']' 00:24:15.183 00:43:48 -- common/autotest_common.sh@940 -- # kill -0 134160 00:24:15.183 00:43:48 -- common/autotest_common.sh@941 -- # uname 00:24:15.183 00:43:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:15.183 00:43:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134160 00:24:15.184 00:43:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:15.184 00:43:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:15.184 00:43:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134160' 00:24:15.184 killing process with pid 134160 00:24:15.184 00:43:48 -- common/autotest_common.sh@955 -- # kill 134160 00:24:15.184 Received shutdown signal, test time was about 17.223875 seconds 00:24:15.184 00:24:15.184 Latency(us) 00:24:15.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.184 =================================================================================================================== 00:24:15.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.184 00:43:48 -- common/autotest_common.sh@960 -- # wait 134160 00:24:15.184 [2024-04-27 00:43:48.597794] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:15.184 [2024-04-27 00:43:48.597910] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:15.184 [2024-04-27 00:43:48.598038] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:15.184 [2024-04-27 00:43:48.598065] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:24:15.442 [2024-04-27 00:43:48.937530] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:16.819 ************************************ 00:24:16.819 END TEST raid_rebuild_test_sb_io 00:24:16.819 ************************************ 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:16.819 00:24:16.819 real 0m24.105s 00:24:16.819 user 0m38.920s 00:24:16.819 sys 0m2.845s 00:24:16.819 00:43:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:16.819 00:43:50 -- common/autotest_common.sh@10 -- # set +x 00:24:16.819 00:43:50 -- 
bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:24:16.819 00:43:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:16.819 00:43:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:16.819 00:43:50 -- common/autotest_common.sh@10 -- # set +x 00:24:16.819 ************************************ 00:24:16.819 START TEST raid5f_state_function_test 00:24:16.819 ************************************ 00:24:16.819 00:43:50 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 3 false 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=134786 00:24:16.819 Process raid pid: 134786 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 134786' 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 134786 /var/tmp/spdk-raid.sock 00:24:16.819 00:43:50 -- common/autotest_common.sh@817 -- # '[' -z 134786 ']' 00:24:16.819 00:43:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:16.819 00:43:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:16.819 00:43:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:16.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
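
The waitforlisten call above blocks until the freshly started bdev_svc answers RPC on /var/tmp/spdk-raid.sock. A rough sketch of that wait (an assumption pieced together from the traced kill -0 liveness checks and standard rpc.py options, not the autotest_common.sh source):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=$2 i
    for ((i = 0; i < 100; i++)); do
        # kill -0 sends no signal; it only verifies the process is still alive.
        kill -0 "$pid" 2> /dev/null || return 1
        # rpc_get_methods succeeds as soon as the RPC server accepts requests.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
            rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
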
00:24:16.819 00:43:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:16.819 00:43:50 -- common/autotest_common.sh@10 -- # set +x 00:24:16.819 00:43:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:16.819 [2024-04-27 00:43:50.268779] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:24:16.819 [2024-04-27 00:43:50.268990] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.077 [2024-04-27 00:43:50.442575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.335 [2024-04-27 00:43:50.676317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.335 [2024-04-27 00:43:50.875091] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:17.901 00:43:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:17.901 00:43:51 -- common/autotest_common.sh@850 -- # return 0 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:17.901 [2024-04-27 00:43:51.423968] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:17.901 [2024-04-27 00:43:51.424113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:17.901 [2024-04-27 00:43:51.424131] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:17.901 [2024-04-27 00:43:51.424156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:17.901 [2024-04-27 00:43:51.424165] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:17.901 [2024-04-27 00:43:51.424215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.901 00:43:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.160 00:43:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.160 "name": "Existed_Raid", 00:24:18.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.160 "strip_size_kb": 64, 00:24:18.160 "state": "configuring", 00:24:18.160 "raid_level": "raid5f", 00:24:18.160 "superblock": false, 00:24:18.160 "num_base_bdevs": 3, 
00:24:18.160 "num_base_bdevs_discovered": 0, 00:24:18.160 "num_base_bdevs_operational": 3, 00:24:18.160 "base_bdevs_list": [ 00:24:18.160 { 00:24:18.160 "name": "BaseBdev1", 00:24:18.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.160 "is_configured": false, 00:24:18.160 "data_offset": 0, 00:24:18.160 "data_size": 0 00:24:18.160 }, 00:24:18.160 { 00:24:18.160 "name": "BaseBdev2", 00:24:18.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.160 "is_configured": false, 00:24:18.160 "data_offset": 0, 00:24:18.160 "data_size": 0 00:24:18.160 }, 00:24:18.160 { 00:24:18.160 "name": "BaseBdev3", 00:24:18.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.160 "is_configured": false, 00:24:18.160 "data_offset": 0, 00:24:18.160 "data_size": 0 00:24:18.160 } 00:24:18.160 ] 00:24:18.160 }' 00:24:18.160 00:43:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.160 00:43:51 -- common/autotest_common.sh@10 -- # set +x 00:24:18.725 00:43:52 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:18.983 [2024-04-27 00:43:52.544037] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:18.983 [2024-04-27 00:43:52.544130] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:24:18.983 00:43:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:19.242 [2024-04-27 00:43:52.748048] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:19.242 [2024-04-27 00:43:52.748162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:19.242 [2024-04-27 00:43:52.748177] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:19.242 [2024-04-27 00:43:52.748200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:19.242 [2024-04-27 00:43:52.748208] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:19.242 [2024-04-27 00:43:52.748238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:19.242 00:43:52 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:19.515 [2024-04-27 00:43:52.998742] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:19.515 BaseBdev1 00:24:19.515 00:43:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:19.515 00:43:53 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:19.515 00:43:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:19.515 00:43:53 -- common/autotest_common.sh@887 -- # local i 00:24:19.515 00:43:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:19.515 00:43:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:19.515 00:43:53 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:19.775 00:43:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:20.033 [ 00:24:20.033 { 00:24:20.033 "name": "BaseBdev1", 00:24:20.033 "aliases": [ 
00:24:20.033 "25b552f3-6fcd-4167-9e16-1bd25772a97a" 00:24:20.033 ], 00:24:20.033 "product_name": "Malloc disk", 00:24:20.033 "block_size": 512, 00:24:20.033 "num_blocks": 65536, 00:24:20.033 "uuid": "25b552f3-6fcd-4167-9e16-1bd25772a97a", 00:24:20.033 "assigned_rate_limits": { 00:24:20.033 "rw_ios_per_sec": 0, 00:24:20.033 "rw_mbytes_per_sec": 0, 00:24:20.033 "r_mbytes_per_sec": 0, 00:24:20.033 "w_mbytes_per_sec": 0 00:24:20.033 }, 00:24:20.033 "claimed": true, 00:24:20.033 "claim_type": "exclusive_write", 00:24:20.033 "zoned": false, 00:24:20.033 "supported_io_types": { 00:24:20.033 "read": true, 00:24:20.033 "write": true, 00:24:20.033 "unmap": true, 00:24:20.033 "write_zeroes": true, 00:24:20.033 "flush": true, 00:24:20.033 "reset": true, 00:24:20.033 "compare": false, 00:24:20.033 "compare_and_write": false, 00:24:20.033 "abort": true, 00:24:20.033 "nvme_admin": false, 00:24:20.033 "nvme_io": false 00:24:20.033 }, 00:24:20.033 "memory_domains": [ 00:24:20.033 { 00:24:20.033 "dma_device_id": "system", 00:24:20.033 "dma_device_type": 1 00:24:20.033 }, 00:24:20.033 { 00:24:20.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.033 "dma_device_type": 2 00:24:20.033 } 00:24:20.033 ], 00:24:20.033 "driver_specific": {} 00:24:20.033 } 00:24:20.033 ] 00:24:20.033 00:43:53 -- common/autotest_common.sh@893 -- # return 0 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.033 00:43:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.292 00:43:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.292 "name": "Existed_Raid", 00:24:20.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.292 "strip_size_kb": 64, 00:24:20.292 "state": "configuring", 00:24:20.292 "raid_level": "raid5f", 00:24:20.292 "superblock": false, 00:24:20.292 "num_base_bdevs": 3, 00:24:20.292 "num_base_bdevs_discovered": 1, 00:24:20.292 "num_base_bdevs_operational": 3, 00:24:20.292 "base_bdevs_list": [ 00:24:20.292 { 00:24:20.292 "name": "BaseBdev1", 00:24:20.292 "uuid": "25b552f3-6fcd-4167-9e16-1bd25772a97a", 00:24:20.292 "is_configured": true, 00:24:20.292 "data_offset": 0, 00:24:20.292 "data_size": 65536 00:24:20.292 }, 00:24:20.292 { 00:24:20.292 "name": "BaseBdev2", 00:24:20.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.292 "is_configured": false, 00:24:20.292 "data_offset": 0, 00:24:20.292 "data_size": 0 00:24:20.292 }, 00:24:20.292 { 00:24:20.292 "name": "BaseBdev3", 00:24:20.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.292 "is_configured": false, 00:24:20.292 "data_offset": 0, 00:24:20.292 "data_size": 0 00:24:20.292 } 00:24:20.292 ] 
00:24:20.292 }' 00:24:20.292 00:43:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.292 00:43:53 -- common/autotest_common.sh@10 -- # set +x 00:24:20.859 00:43:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:21.118 [2024-04-27 00:43:54.591172] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:21.118 [2024-04-27 00:43:54.591296] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:24:21.118 00:43:54 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:21.118 00:43:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:21.377 [2024-04-27 00:43:54.847278] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.377 [2024-04-27 00:43:54.849573] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:21.377 [2024-04-27 00:43:54.849667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:21.377 [2024-04-27 00:43:54.849689] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:21.377 [2024-04-27 00:43:54.849719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.377 00:43:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.636 00:43:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:21.636 "name": "Existed_Raid", 00:24:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.636 "strip_size_kb": 64, 00:24:21.636 "state": "configuring", 00:24:21.636 "raid_level": "raid5f", 00:24:21.636 "superblock": false, 00:24:21.636 "num_base_bdevs": 3, 00:24:21.636 "num_base_bdevs_discovered": 1, 00:24:21.636 "num_base_bdevs_operational": 3, 00:24:21.636 "base_bdevs_list": [ 00:24:21.636 { 00:24:21.636 "name": "BaseBdev1", 00:24:21.636 "uuid": "25b552f3-6fcd-4167-9e16-1bd25772a97a", 00:24:21.636 "is_configured": true, 00:24:21.636 "data_offset": 0, 00:24:21.636 "data_size": 65536 00:24:21.636 }, 00:24:21.636 { 00:24:21.636 "name": "BaseBdev2", 00:24:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.636 "is_configured": false, 
00:24:21.636 "data_offset": 0, 00:24:21.636 "data_size": 0 00:24:21.636 }, 00:24:21.636 { 00:24:21.636 "name": "BaseBdev3", 00:24:21.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.636 "is_configured": false, 00:24:21.636 "data_offset": 0, 00:24:21.636 "data_size": 0 00:24:21.636 } 00:24:21.636 ] 00:24:21.636 }' 00:24:21.636 00:43:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.636 00:43:55 -- common/autotest_common.sh@10 -- # set +x 00:24:22.203 00:43:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:22.461 [2024-04-27 00:43:56.011330] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:22.461 BaseBdev2 00:24:22.461 00:43:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:22.461 00:43:56 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:24:22.461 00:43:56 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:22.461 00:43:56 -- common/autotest_common.sh@887 -- # local i 00:24:22.461 00:43:56 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:22.461 00:43:56 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:22.461 00:43:56 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:22.720 00:43:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:22.978 [ 00:24:22.978 { 00:24:22.978 "name": "BaseBdev2", 00:24:22.978 "aliases": [ 00:24:22.978 "b967e06d-ecd3-4f8c-92d3-026f7837bc7c" 00:24:22.978 ], 00:24:22.978 "product_name": "Malloc disk", 00:24:22.978 "block_size": 512, 00:24:22.978 "num_blocks": 65536, 00:24:22.978 "uuid": "b967e06d-ecd3-4f8c-92d3-026f7837bc7c", 00:24:22.978 "assigned_rate_limits": { 00:24:22.978 "rw_ios_per_sec": 0, 00:24:22.978 "rw_mbytes_per_sec": 0, 00:24:22.978 "r_mbytes_per_sec": 0, 00:24:22.978 "w_mbytes_per_sec": 0 00:24:22.978 }, 00:24:22.978 "claimed": true, 00:24:22.978 "claim_type": "exclusive_write", 00:24:22.978 "zoned": false, 00:24:22.978 "supported_io_types": { 00:24:22.978 "read": true, 00:24:22.978 "write": true, 00:24:22.978 "unmap": true, 00:24:22.978 "write_zeroes": true, 00:24:22.978 "flush": true, 00:24:22.978 "reset": true, 00:24:22.978 "compare": false, 00:24:22.978 "compare_and_write": false, 00:24:22.978 "abort": true, 00:24:22.978 "nvme_admin": false, 00:24:22.978 "nvme_io": false 00:24:22.978 }, 00:24:22.978 "memory_domains": [ 00:24:22.978 { 00:24:22.978 "dma_device_id": "system", 00:24:22.978 "dma_device_type": 1 00:24:22.978 }, 00:24:22.978 { 00:24:22.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.978 "dma_device_type": 2 00:24:22.978 } 00:24:22.978 ], 00:24:22.978 "driver_specific": {} 00:24:22.978 } 00:24:22.978 ] 00:24:22.978 00:43:56 -- common/autotest_common.sh@893 -- # return 0 00:24:22.978 00:43:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:22.978 00:43:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:22.978 00:43:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:22.978 00:43:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:22.978 00:43:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:22.978 00:43:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:22.978 00:43:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:24:22.979 00:43:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:22.979 00:43:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:22.979 00:43:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:22.979 00:43:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:22.979 00:43:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:22.979 00:43:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.979 00:43:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.237 00:43:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.237 "name": "Existed_Raid", 00:24:23.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.237 "strip_size_kb": 64, 00:24:23.237 "state": "configuring", 00:24:23.237 "raid_level": "raid5f", 00:24:23.237 "superblock": false, 00:24:23.237 "num_base_bdevs": 3, 00:24:23.237 "num_base_bdevs_discovered": 2, 00:24:23.237 "num_base_bdevs_operational": 3, 00:24:23.237 "base_bdevs_list": [ 00:24:23.237 { 00:24:23.237 "name": "BaseBdev1", 00:24:23.237 "uuid": "25b552f3-6fcd-4167-9e16-1bd25772a97a", 00:24:23.237 "is_configured": true, 00:24:23.237 "data_offset": 0, 00:24:23.237 "data_size": 65536 00:24:23.237 }, 00:24:23.237 { 00:24:23.237 "name": "BaseBdev2", 00:24:23.237 "uuid": "b967e06d-ecd3-4f8c-92d3-026f7837bc7c", 00:24:23.237 "is_configured": true, 00:24:23.237 "data_offset": 0, 00:24:23.237 "data_size": 65536 00:24:23.237 }, 00:24:23.237 { 00:24:23.238 "name": "BaseBdev3", 00:24:23.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.238 "is_configured": false, 00:24:23.238 "data_offset": 0, 00:24:23.238 "data_size": 0 00:24:23.238 } 00:24:23.238 ] 00:24:23.238 }' 00:24:23.238 00:43:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.238 00:43:56 -- common/autotest_common.sh@10 -- # set +x 00:24:23.804 00:43:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:24.062 [2024-04-27 00:43:57.618252] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:24.062 [2024-04-27 00:43:57.618428] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:24.062 [2024-04-27 00:43:57.618443] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:24:24.062 [2024-04-27 00:43:57.618556] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:24:24.062 [2024-04-27 00:43:57.623762] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:24.062 [2024-04-27 00:43:57.623790] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:24:24.062 [2024-04-27 00:43:57.624220] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.062 BaseBdev3 00:24:24.062 00:43:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:24.062 00:43:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:24:24.062 00:43:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:24.062 00:43:57 -- common/autotest_common.sh@887 -- # local i 00:24:24.062 00:43:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:24.062 00:43:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:24.063 00:43:57 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:24.322 00:43:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:24.582 [ 00:24:24.582 { 00:24:24.582 "name": "BaseBdev3", 00:24:24.582 "aliases": [ 00:24:24.582 "a2dd0e32-74bb-4cc0-843e-1d906e9213fc" 00:24:24.582 ], 00:24:24.582 "product_name": "Malloc disk", 00:24:24.582 "block_size": 512, 00:24:24.582 "num_blocks": 65536, 00:24:24.582 "uuid": "a2dd0e32-74bb-4cc0-843e-1d906e9213fc", 00:24:24.582 "assigned_rate_limits": { 00:24:24.582 "rw_ios_per_sec": 0, 00:24:24.582 "rw_mbytes_per_sec": 0, 00:24:24.582 "r_mbytes_per_sec": 0, 00:24:24.582 "w_mbytes_per_sec": 0 00:24:24.582 }, 00:24:24.582 "claimed": true, 00:24:24.582 "claim_type": "exclusive_write", 00:24:24.582 "zoned": false, 00:24:24.582 "supported_io_types": { 00:24:24.582 "read": true, 00:24:24.582 "write": true, 00:24:24.582 "unmap": true, 00:24:24.582 "write_zeroes": true, 00:24:24.582 "flush": true, 00:24:24.582 "reset": true, 00:24:24.582 "compare": false, 00:24:24.582 "compare_and_write": false, 00:24:24.582 "abort": true, 00:24:24.582 "nvme_admin": false, 00:24:24.582 "nvme_io": false 00:24:24.582 }, 00:24:24.582 "memory_domains": [ 00:24:24.582 { 00:24:24.582 "dma_device_id": "system", 00:24:24.582 "dma_device_type": 1 00:24:24.582 }, 00:24:24.582 { 00:24:24.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.582 "dma_device_type": 2 00:24:24.582 } 00:24:24.582 ], 00:24:24.582 "driver_specific": {} 00:24:24.582 } 00:24:24.582 ] 00:24:24.582 00:43:58 -- common/autotest_common.sh@893 -- # return 0 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:24.582 00:43:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:24.583 00:43:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:24.583 00:43:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:24.583 00:43:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:24.583 00:43:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.583 00:43:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.842 00:43:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.842 "name": "Existed_Raid", 00:24:24.842 "uuid": "d1557d0d-e428-45e8-be68-3e4f4f895176", 00:24:24.842 "strip_size_kb": 64, 00:24:24.842 "state": "online", 00:24:24.842 "raid_level": "raid5f", 00:24:24.842 "superblock": false, 00:24:24.842 "num_base_bdevs": 3, 00:24:24.842 "num_base_bdevs_discovered": 3, 00:24:24.842 "num_base_bdevs_operational": 3, 00:24:24.842 "base_bdevs_list": [ 00:24:24.842 { 00:24:24.842 "name": "BaseBdev1", 00:24:24.842 "uuid": "25b552f3-6fcd-4167-9e16-1bd25772a97a", 00:24:24.842 "is_configured": true, 00:24:24.842 "data_offset": 0, 00:24:24.842 "data_size": 65536 
00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "name": "BaseBdev2", 00:24:24.842 "uuid": "b967e06d-ecd3-4f8c-92d3-026f7837bc7c", 00:24:24.842 "is_configured": true, 00:24:24.842 "data_offset": 0, 00:24:24.842 "data_size": 65536 00:24:24.842 }, 00:24:24.842 { 00:24:24.842 "name": "BaseBdev3", 00:24:24.842 "uuid": "a2dd0e32-74bb-4cc0-843e-1d906e9213fc", 00:24:24.842 "is_configured": true, 00:24:24.842 "data_offset": 0, 00:24:24.842 "data_size": 65536 00:24:24.842 } 00:24:24.842 ] 00:24:24.842 }' 00:24:24.842 00:43:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.842 00:43:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.409 00:43:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:25.668 [2024-04-27 00:43:59.142270] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.668 00:43:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.934 00:43:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:25.934 "name": "Existed_Raid", 00:24:25.934 "uuid": "d1557d0d-e428-45e8-be68-3e4f4f895176", 00:24:25.934 "strip_size_kb": 64, 00:24:25.934 "state": "online", 00:24:25.934 "raid_level": "raid5f", 00:24:25.934 "superblock": false, 00:24:25.934 "num_base_bdevs": 3, 00:24:25.934 "num_base_bdevs_discovered": 2, 00:24:25.934 "num_base_bdevs_operational": 2, 00:24:25.934 "base_bdevs_list": [ 00:24:25.934 { 00:24:25.934 "name": null, 00:24:25.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.934 "is_configured": false, 00:24:25.934 "data_offset": 0, 00:24:25.934 "data_size": 65536 00:24:25.934 }, 00:24:25.934 { 00:24:25.934 "name": "BaseBdev2", 00:24:25.934 "uuid": "b967e06d-ecd3-4f8c-92d3-026f7837bc7c", 00:24:25.934 "is_configured": true, 00:24:25.934 "data_offset": 0, 00:24:25.934 "data_size": 65536 00:24:25.934 }, 00:24:25.934 { 00:24:25.934 "name": "BaseBdev3", 00:24:25.934 "uuid": "a2dd0e32-74bb-4cc0-843e-1d906e9213fc", 00:24:25.934 "is_configured": true, 00:24:25.934 "data_offset": 0, 00:24:25.934 "data_size": 65536 00:24:25.934 } 00:24:25.934 ] 00:24:25.934 }' 00:24:25.934 00:43:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:25.934 00:43:59 -- 
common/autotest_common.sh@10 -- # set +x 00:24:26.501 00:44:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:26.501 00:44:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:26.501 00:44:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.501 00:44:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:26.759 00:44:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:26.759 00:44:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:26.759 00:44:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:27.017 [2024-04-27 00:44:00.533546] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:27.017 [2024-04-27 00:44:00.533709] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:27.275 [2024-04-27 00:44:00.616487] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:27.275 00:44:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:27.275 00:44:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:27.275 00:44:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.275 00:44:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:27.535 00:44:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:27.535 00:44:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:27.535 00:44:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:27.793 [2024-04-27 00:44:01.136876] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:27.793 [2024-04-27 00:44:01.136998] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:24:27.793 00:44:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:27.793 00:44:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:27.793 00:44:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.793 00:44:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:28.052 00:44:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:28.052 00:44:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:28.052 00:44:01 -- bdev/bdev_raid.sh@287 -- # killprocess 134786 00:24:28.052 00:44:01 -- common/autotest_common.sh@936 -- # '[' -z 134786 ']' 00:24:28.052 00:44:01 -- common/autotest_common.sh@940 -- # kill -0 134786 00:24:28.052 00:44:01 -- common/autotest_common.sh@941 -- # uname 00:24:28.052 00:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:28.052 00:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 134786 00:24:28.052 killing process with pid 134786 00:24:28.052 00:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:28.052 00:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:28.052 00:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 134786' 00:24:28.052 00:44:01 -- common/autotest_common.sh@955 -- # kill 134786 00:24:28.052 00:44:01 -- common/autotest_common.sh@960 -- # wait 134786 00:24:28.052 [2024-04-27 00:44:01.497894] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:24:28.052 [2024-04-27 00:44:01.498085] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:28.986 ************************************ 00:24:28.986 END TEST raid5f_state_function_test 00:24:28.986 ************************************ 00:24:28.986 00:44:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:28.986 00:24:28.986 real 0m12.377s 00:24:28.986 user 0m21.659s 00:24:28.986 sys 0m1.647s 00:24:28.986 00:44:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:28.987 00:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:24:29.246 00:44:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:24:29.246 00:44:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:29.246 00:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.246 ************************************ 00:24:29.246 START TEST raid5f_state_function_test_sb 00:24:29.246 ************************************ 00:24:29.246 00:44:02 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 3 true 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=135167 00:24:29.246 Process raid pid: 135167 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 135167' 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:29.246 00:44:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 135167 /var/tmp/spdk-raid.sock 00:24:29.246 00:44:02 -- 
common/autotest_common.sh@817 -- # '[' -z 135167 ']' 00:24:29.246 00:44:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:29.246 00:44:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:29.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:29.246 00:44:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:29.246 00:44:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:29.246 00:44:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.246 [2024-04-27 00:44:02.735619] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:24:29.246 [2024-04-27 00:44:02.735823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.505 [2024-04-27 00:44:02.900948] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.763 [2024-04-27 00:44:03.129103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.763 [2024-04-27 00:44:03.337611] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.331 00:44:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:30.331 00:44:03 -- common/autotest_common.sh@850 -- # return 0 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:30.331 [2024-04-27 00:44:03.848211] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:30.331 [2024-04-27 00:44:03.848335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:30.331 [2024-04-27 00:44:03.848354] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:30.331 [2024-04-27 00:44:03.848392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:30.331 [2024-04-27 00:44:03.848404] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:30.331 [2024-04-27 00:44:03.848456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.331 00:44:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:24:30.588 00:44:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:30.588 "name": "Existed_Raid", 00:24:30.588 "uuid": "489a2bc2-c775-4a52-b88c-3987862f8d83", 00:24:30.588 "strip_size_kb": 64, 00:24:30.588 "state": "configuring", 00:24:30.588 "raid_level": "raid5f", 00:24:30.588 "superblock": true, 00:24:30.588 "num_base_bdevs": 3, 00:24:30.588 "num_base_bdevs_discovered": 0, 00:24:30.588 "num_base_bdevs_operational": 3, 00:24:30.588 "base_bdevs_list": [ 00:24:30.588 { 00:24:30.588 "name": "BaseBdev1", 00:24:30.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.588 "is_configured": false, 00:24:30.588 "data_offset": 0, 00:24:30.588 "data_size": 0 00:24:30.588 }, 00:24:30.588 { 00:24:30.588 "name": "BaseBdev2", 00:24:30.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.588 "is_configured": false, 00:24:30.588 "data_offset": 0, 00:24:30.588 "data_size": 0 00:24:30.588 }, 00:24:30.588 { 00:24:30.588 "name": "BaseBdev3", 00:24:30.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.588 "is_configured": false, 00:24:30.588 "data_offset": 0, 00:24:30.588 "data_size": 0 00:24:30.588 } 00:24:30.588 ] 00:24:30.588 }' 00:24:30.588 00:44:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:30.589 00:44:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.577 00:44:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:31.577 [2024-04-27 00:44:04.944219] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:31.577 [2024-04-27 00:44:04.944286] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:24:31.577 00:44:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:31.577 [2024-04-27 00:44:05.148306] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:31.577 [2024-04-27 00:44:05.148427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:31.577 [2024-04-27 00:44:05.148450] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:31.577 [2024-04-27 00:44:05.148473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:31.577 [2024-04-27 00:44:05.148482] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:31.577 [2024-04-27 00:44:05.148511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:31.835 00:44:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:32.094 [2024-04-27 00:44:05.425231] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:32.094 BaseBdev1 00:24:32.094 00:44:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:32.094 00:44:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:32.094 00:44:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:32.094 00:44:05 -- common/autotest_common.sh@887 -- # local i 00:24:32.094 00:44:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:32.094 00:44:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:32.094 00:44:05 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:32.094 00:44:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:32.353 [ 00:24:32.353 { 00:24:32.353 "name": "BaseBdev1", 00:24:32.353 "aliases": [ 00:24:32.353 "61405b60-9354-41d8-99c4-d79867c3f16c" 00:24:32.353 ], 00:24:32.353 "product_name": "Malloc disk", 00:24:32.353 "block_size": 512, 00:24:32.353 "num_blocks": 65536, 00:24:32.353 "uuid": "61405b60-9354-41d8-99c4-d79867c3f16c", 00:24:32.353 "assigned_rate_limits": { 00:24:32.353 "rw_ios_per_sec": 0, 00:24:32.353 "rw_mbytes_per_sec": 0, 00:24:32.353 "r_mbytes_per_sec": 0, 00:24:32.353 "w_mbytes_per_sec": 0 00:24:32.353 }, 00:24:32.353 "claimed": true, 00:24:32.353 "claim_type": "exclusive_write", 00:24:32.353 "zoned": false, 00:24:32.353 "supported_io_types": { 00:24:32.353 "read": true, 00:24:32.353 "write": true, 00:24:32.353 "unmap": true, 00:24:32.353 "write_zeroes": true, 00:24:32.353 "flush": true, 00:24:32.353 "reset": true, 00:24:32.353 "compare": false, 00:24:32.353 "compare_and_write": false, 00:24:32.353 "abort": true, 00:24:32.353 "nvme_admin": false, 00:24:32.353 "nvme_io": false 00:24:32.353 }, 00:24:32.353 "memory_domains": [ 00:24:32.353 { 00:24:32.353 "dma_device_id": "system", 00:24:32.353 "dma_device_type": 1 00:24:32.353 }, 00:24:32.353 { 00:24:32.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.353 "dma_device_type": 2 00:24:32.353 } 00:24:32.353 ], 00:24:32.353 "driver_specific": {} 00:24:32.353 } 00:24:32.353 ] 00:24:32.353 00:44:05 -- common/autotest_common.sh@893 -- # return 0 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.353 00:44:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.612 00:44:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.612 "name": "Existed_Raid", 00:24:32.612 "uuid": "9eab04d6-5d78-40cf-863f-d600fe7e2116", 00:24:32.612 "strip_size_kb": 64, 00:24:32.612 "state": "configuring", 00:24:32.612 "raid_level": "raid5f", 00:24:32.612 "superblock": true, 00:24:32.612 "num_base_bdevs": 3, 00:24:32.612 "num_base_bdevs_discovered": 1, 00:24:32.612 "num_base_bdevs_operational": 3, 00:24:32.612 "base_bdevs_list": [ 00:24:32.612 { 00:24:32.612 "name": "BaseBdev1", 00:24:32.612 "uuid": "61405b60-9354-41d8-99c4-d79867c3f16c", 00:24:32.612 "is_configured": true, 00:24:32.612 "data_offset": 2048, 00:24:32.612 "data_size": 63488 00:24:32.612 }, 00:24:32.612 { 00:24:32.612 "name": "BaseBdev2", 00:24:32.612 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:32.612 "is_configured": false, 00:24:32.612 "data_offset": 0, 00:24:32.612 "data_size": 0 00:24:32.612 }, 00:24:32.612 { 00:24:32.612 "name": "BaseBdev3", 00:24:32.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.612 "is_configured": false, 00:24:32.612 "data_offset": 0, 00:24:32.612 "data_size": 0 00:24:32.612 } 00:24:32.612 ] 00:24:32.612 }' 00:24:32.612 00:44:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.612 00:44:06 -- common/autotest_common.sh@10 -- # set +x 00:24:33.180 00:44:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:33.438 [2024-04-27 00:44:06.941622] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:33.438 [2024-04-27 00:44:06.941712] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:24:33.438 00:44:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:33.438 00:44:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:34.005 00:44:07 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:34.005 BaseBdev1 00:24:34.005 00:44:07 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:34.005 00:44:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:34.005 00:44:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:34.005 00:44:07 -- common/autotest_common.sh@887 -- # local i 00:24:34.005 00:44:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:34.005 00:44:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:34.005 00:44:07 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.263 00:44:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:34.522 [ 00:24:34.522 { 00:24:34.522 "name": "BaseBdev1", 00:24:34.522 "aliases": [ 00:24:34.522 "eaceea11-c749-4d39-8a88-9e95c804e410" 00:24:34.522 ], 00:24:34.522 "product_name": "Malloc disk", 00:24:34.522 "block_size": 512, 00:24:34.522 "num_blocks": 65536, 00:24:34.522 "uuid": "eaceea11-c749-4d39-8a88-9e95c804e410", 00:24:34.522 "assigned_rate_limits": { 00:24:34.522 "rw_ios_per_sec": 0, 00:24:34.522 "rw_mbytes_per_sec": 0, 00:24:34.522 "r_mbytes_per_sec": 0, 00:24:34.522 "w_mbytes_per_sec": 0 00:24:34.522 }, 00:24:34.522 "claimed": false, 00:24:34.522 "zoned": false, 00:24:34.522 "supported_io_types": { 00:24:34.522 "read": true, 00:24:34.522 "write": true, 00:24:34.522 "unmap": true, 00:24:34.522 "write_zeroes": true, 00:24:34.522 "flush": true, 00:24:34.522 "reset": true, 00:24:34.522 "compare": false, 00:24:34.522 "compare_and_write": false, 00:24:34.522 "abort": true, 00:24:34.522 "nvme_admin": false, 00:24:34.522 "nvme_io": false 00:24:34.522 }, 00:24:34.522 "memory_domains": [ 00:24:34.522 { 00:24:34.522 "dma_device_id": "system", 00:24:34.522 "dma_device_type": 1 00:24:34.522 }, 00:24:34.522 { 00:24:34.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.522 "dma_device_type": 2 00:24:34.522 } 00:24:34.522 ], 00:24:34.522 "driver_specific": {} 00:24:34.522 } 00:24:34.522 ] 00:24:34.522 00:44:07 -- common/autotest_common.sh@893 -- # return 0 00:24:34.522 00:44:07 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:24:34.780 [2024-04-27 00:44:08.201245] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:34.780 [2024-04-27 00:44:08.203712] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:34.780 [2024-04-27 00:44:08.203787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:34.780 [2024-04-27 00:44:08.203802] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:34.780 [2024-04-27 00:44:08.203835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.780 00:44:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:35.039 00:44:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.039 "name": "Existed_Raid", 00:24:35.039 "uuid": "4330d5df-18d2-4cbb-8cd4-76a523d92f8b", 00:24:35.039 "strip_size_kb": 64, 00:24:35.039 "state": "configuring", 00:24:35.039 "raid_level": "raid5f", 00:24:35.039 "superblock": true, 00:24:35.039 "num_base_bdevs": 3, 00:24:35.039 "num_base_bdevs_discovered": 1, 00:24:35.039 "num_base_bdevs_operational": 3, 00:24:35.039 "base_bdevs_list": [ 00:24:35.039 { 00:24:35.039 "name": "BaseBdev1", 00:24:35.039 "uuid": "eaceea11-c749-4d39-8a88-9e95c804e410", 00:24:35.039 "is_configured": true, 00:24:35.039 "data_offset": 2048, 00:24:35.039 "data_size": 63488 00:24:35.039 }, 00:24:35.039 { 00:24:35.039 "name": "BaseBdev2", 00:24:35.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.039 "is_configured": false, 00:24:35.040 "data_offset": 0, 00:24:35.040 "data_size": 0 00:24:35.040 }, 00:24:35.040 { 00:24:35.040 "name": "BaseBdev3", 00:24:35.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:35.040 "is_configured": false, 00:24:35.040 "data_offset": 0, 00:24:35.040 "data_size": 0 00:24:35.040 } 00:24:35.040 ] 00:24:35.040 }' 00:24:35.040 00:44:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.040 00:44:08 -- common/autotest_common.sh@10 -- # set +x 00:24:35.607 00:44:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:35.865 [2024-04-27 00:44:09.316548] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:35.865 BaseBdev2 00:24:35.865 00:44:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:35.865 00:44:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:24:35.865 00:44:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:35.865 00:44:09 -- common/autotest_common.sh@887 -- # local i 00:24:35.865 00:44:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:35.865 00:44:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:35.865 00:44:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:36.123 00:44:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:36.382 [ 00:24:36.382 { 00:24:36.382 "name": "BaseBdev2", 00:24:36.382 "aliases": [ 00:24:36.382 "a07e7f3b-fa59-476e-815b-9de20ff88e9b" 00:24:36.382 ], 00:24:36.382 "product_name": "Malloc disk", 00:24:36.382 "block_size": 512, 00:24:36.382 "num_blocks": 65536, 00:24:36.382 "uuid": "a07e7f3b-fa59-476e-815b-9de20ff88e9b", 00:24:36.382 "assigned_rate_limits": { 00:24:36.382 "rw_ios_per_sec": 0, 00:24:36.382 "rw_mbytes_per_sec": 0, 00:24:36.382 "r_mbytes_per_sec": 0, 00:24:36.382 "w_mbytes_per_sec": 0 00:24:36.382 }, 00:24:36.382 "claimed": true, 00:24:36.382 "claim_type": "exclusive_write", 00:24:36.382 "zoned": false, 00:24:36.382 "supported_io_types": { 00:24:36.382 "read": true, 00:24:36.382 "write": true, 00:24:36.382 "unmap": true, 00:24:36.382 "write_zeroes": true, 00:24:36.382 "flush": true, 00:24:36.382 "reset": true, 00:24:36.382 "compare": false, 00:24:36.382 "compare_and_write": false, 00:24:36.382 "abort": true, 00:24:36.382 "nvme_admin": false, 00:24:36.382 "nvme_io": false 00:24:36.382 }, 00:24:36.382 "memory_domains": [ 00:24:36.382 { 00:24:36.382 "dma_device_id": "system", 00:24:36.382 "dma_device_type": 1 00:24:36.382 }, 00:24:36.382 { 00:24:36.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.382 "dma_device_type": 2 00:24:36.382 } 00:24:36.382 ], 00:24:36.382 "driver_specific": {} 00:24:36.382 } 00:24:36.382 ] 00:24:36.382 00:44:09 -- common/autotest_common.sh@893 -- # return 0 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.382 00:44:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.639 00:44:10 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:24:36.639 "name": "Existed_Raid", 00:24:36.639 "uuid": "4330d5df-18d2-4cbb-8cd4-76a523d92f8b", 00:24:36.639 "strip_size_kb": 64, 00:24:36.639 "state": "configuring", 00:24:36.639 "raid_level": "raid5f", 00:24:36.639 "superblock": true, 00:24:36.639 "num_base_bdevs": 3, 00:24:36.639 "num_base_bdevs_discovered": 2, 00:24:36.639 "num_base_bdevs_operational": 3, 00:24:36.639 "base_bdevs_list": [ 00:24:36.639 { 00:24:36.639 "name": "BaseBdev1", 00:24:36.639 "uuid": "eaceea11-c749-4d39-8a88-9e95c804e410", 00:24:36.639 "is_configured": true, 00:24:36.639 "data_offset": 2048, 00:24:36.639 "data_size": 63488 00:24:36.639 }, 00:24:36.639 { 00:24:36.639 "name": "BaseBdev2", 00:24:36.639 "uuid": "a07e7f3b-fa59-476e-815b-9de20ff88e9b", 00:24:36.639 "is_configured": true, 00:24:36.639 "data_offset": 2048, 00:24:36.639 "data_size": 63488 00:24:36.639 }, 00:24:36.639 { 00:24:36.639 "name": "BaseBdev3", 00:24:36.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.639 "is_configured": false, 00:24:36.639 "data_offset": 0, 00:24:36.639 "data_size": 0 00:24:36.639 } 00:24:36.639 ] 00:24:36.639 }' 00:24:36.639 00:44:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.639 00:44:10 -- common/autotest_common.sh@10 -- # set +x 00:24:37.206 00:44:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:37.466 [2024-04-27 00:44:11.039703] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:37.467 [2024-04-27 00:44:11.040019] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:37.467 [2024-04-27 00:44:11.040069] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:37.467 [2024-04-27 00:44:11.040248] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:37.467 BaseBdev3 00:24:37.467 [2024-04-27 00:44:11.045430] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:37.467 [2024-04-27 00:44:11.045460] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:24:37.467 [2024-04-27 00:44:11.045706] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.729 00:44:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:37.729 00:44:11 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:24:37.729 00:44:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:37.729 00:44:11 -- common/autotest_common.sh@887 -- # local i 00:24:37.729 00:44:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:37.729 00:44:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:37.729 00:44:11 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:37.987 00:44:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:37.987 [ 00:24:37.987 { 00:24:37.988 "name": "BaseBdev3", 00:24:37.988 "aliases": [ 00:24:37.988 "51d99305-6afc-4d87-beea-fabb59af24f8" 00:24:37.988 ], 00:24:37.988 "product_name": "Malloc disk", 00:24:37.988 "block_size": 512, 00:24:37.988 "num_blocks": 65536, 00:24:37.988 "uuid": "51d99305-6afc-4d87-beea-fabb59af24f8", 00:24:37.988 "assigned_rate_limits": { 00:24:37.988 "rw_ios_per_sec": 0, 00:24:37.988 
"rw_mbytes_per_sec": 0, 00:24:37.988 "r_mbytes_per_sec": 0, 00:24:37.988 "w_mbytes_per_sec": 0 00:24:37.988 }, 00:24:37.988 "claimed": true, 00:24:37.988 "claim_type": "exclusive_write", 00:24:37.988 "zoned": false, 00:24:37.988 "supported_io_types": { 00:24:37.988 "read": true, 00:24:37.988 "write": true, 00:24:37.988 "unmap": true, 00:24:37.988 "write_zeroes": true, 00:24:37.988 "flush": true, 00:24:37.988 "reset": true, 00:24:37.988 "compare": false, 00:24:37.988 "compare_and_write": false, 00:24:37.988 "abort": true, 00:24:37.988 "nvme_admin": false, 00:24:37.988 "nvme_io": false 00:24:37.988 }, 00:24:37.988 "memory_domains": [ 00:24:37.988 { 00:24:37.988 "dma_device_id": "system", 00:24:37.988 "dma_device_type": 1 00:24:37.988 }, 00:24:37.988 { 00:24:37.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.988 "dma_device_type": 2 00:24:37.988 } 00:24:37.988 ], 00:24:37.988 "driver_specific": {} 00:24:37.988 } 00:24:37.988 ] 00:24:37.988 00:44:11 -- common/autotest_common.sh@893 -- # return 0 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.988 00:44:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.247 00:44:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:38.247 "name": "Existed_Raid", 00:24:38.247 "uuid": "4330d5df-18d2-4cbb-8cd4-76a523d92f8b", 00:24:38.247 "strip_size_kb": 64, 00:24:38.247 "state": "online", 00:24:38.247 "raid_level": "raid5f", 00:24:38.247 "superblock": true, 00:24:38.247 "num_base_bdevs": 3, 00:24:38.247 "num_base_bdevs_discovered": 3, 00:24:38.247 "num_base_bdevs_operational": 3, 00:24:38.247 "base_bdevs_list": [ 00:24:38.247 { 00:24:38.247 "name": "BaseBdev1", 00:24:38.247 "uuid": "eaceea11-c749-4d39-8a88-9e95c804e410", 00:24:38.247 "is_configured": true, 00:24:38.247 "data_offset": 2048, 00:24:38.247 "data_size": 63488 00:24:38.247 }, 00:24:38.247 { 00:24:38.247 "name": "BaseBdev2", 00:24:38.247 "uuid": "a07e7f3b-fa59-476e-815b-9de20ff88e9b", 00:24:38.247 "is_configured": true, 00:24:38.247 "data_offset": 2048, 00:24:38.247 "data_size": 63488 00:24:38.247 }, 00:24:38.247 { 00:24:38.247 "name": "BaseBdev3", 00:24:38.247 "uuid": "51d99305-6afc-4d87-beea-fabb59af24f8", 00:24:38.247 "is_configured": true, 00:24:38.247 "data_offset": 2048, 00:24:38.247 "data_size": 63488 00:24:38.247 } 00:24:38.247 ] 00:24:38.247 }' 00:24:38.247 00:44:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:38.247 00:44:11 -- common/autotest_common.sh@10 -- # set +x 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@262 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:39.181 [2024-04-27 00:44:12.595390] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.181 00:44:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.438 00:44:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:39.438 "name": "Existed_Raid", 00:24:39.438 "uuid": "4330d5df-18d2-4cbb-8cd4-76a523d92f8b", 00:24:39.438 "strip_size_kb": 64, 00:24:39.438 "state": "online", 00:24:39.438 "raid_level": "raid5f", 00:24:39.438 "superblock": true, 00:24:39.438 "num_base_bdevs": 3, 00:24:39.438 "num_base_bdevs_discovered": 2, 00:24:39.438 "num_base_bdevs_operational": 2, 00:24:39.438 "base_bdevs_list": [ 00:24:39.438 { 00:24:39.438 "name": null, 00:24:39.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.438 "is_configured": false, 00:24:39.438 "data_offset": 2048, 00:24:39.438 "data_size": 63488 00:24:39.438 }, 00:24:39.438 { 00:24:39.438 "name": "BaseBdev2", 00:24:39.438 "uuid": "a07e7f3b-fa59-476e-815b-9de20ff88e9b", 00:24:39.438 "is_configured": true, 00:24:39.438 "data_offset": 2048, 00:24:39.438 "data_size": 63488 00:24:39.438 }, 00:24:39.438 { 00:24:39.438 "name": "BaseBdev3", 00:24:39.438 "uuid": "51d99305-6afc-4d87-beea-fabb59af24f8", 00:24:39.438 "is_configured": true, 00:24:39.438 "data_offset": 2048, 00:24:39.438 "data_size": 63488 00:24:39.438 } 00:24:39.438 ] 00:24:39.438 }' 00:24:39.438 00:44:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:39.438 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:40.372 00:44:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:40.372 00:44:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:40.372 00:44:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.372 00:44:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:40.372 00:44:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:40.372 00:44:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:40.372 00:44:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:40.631 [2024-04-27 00:44:14.074271] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:40.631 [2024-04-27 00:44:14.074457] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.631 [2024-04-27 00:44:14.140206] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:40.631 00:44:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:40.631 00:44:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:40.631 00:44:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.631 00:44:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:40.889 00:44:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:40.889 00:44:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:40.889 00:44:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:41.148 [2024-04-27 00:44:14.664538] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:41.148 [2024-04-27 00:44:14.664627] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:24:41.407 00:44:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:41.407 00:44:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:41.407 00:44:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.407 00:44:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:41.666 00:44:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:41.666 00:44:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:41.666 00:44:15 -- bdev/bdev_raid.sh@287 -- # killprocess 135167 00:24:41.666 00:44:15 -- common/autotest_common.sh@936 -- # '[' -z 135167 ']' 00:24:41.666 00:44:15 -- common/autotest_common.sh@940 -- # kill -0 135167 00:24:41.666 00:44:15 -- common/autotest_common.sh@941 -- # uname 00:24:41.666 00:44:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:41.666 00:44:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135167 00:24:41.666 killing process with pid 135167 00:24:41.666 00:44:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:41.666 00:44:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:41.666 00:44:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135167' 00:24:41.666 00:44:15 -- common/autotest_common.sh@955 -- # kill 135167 00:24:41.666 00:44:15 -- common/autotest_common.sh@960 -- # wait 135167 00:24:41.666 [2024-04-27 00:44:15.044317] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.666 [2024-04-27 00:44:15.044477] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:42.601 ************************************ 00:24:42.601 END TEST raid5f_state_function_test_sb 00:24:42.601 ************************************ 00:24:42.601 00:44:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:42.601 00:24:42.601 real 0m13.382s 00:24:42.602 user 0m23.537s 00:24:42.602 sys 0m1.643s 00:24:42.602 00:44:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:42.602 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:24:42.602 
00:44:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:24:42.602 00:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:42.602 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:24:42.602 ************************************ 00:24:42.602 START TEST raid5f_superblock_test 00:24:42.602 ************************************ 00:24:42.602 00:44:16 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid5f 3 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@357 -- # raid_pid=135565 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:42.602 00:44:16 -- bdev/bdev_raid.sh@358 -- # waitforlisten 135565 /var/tmp/spdk-raid.sock 00:24:42.602 00:44:16 -- common/autotest_common.sh@817 -- # '[' -z 135565 ']' 00:24:42.602 00:44:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:42.602 00:44:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:42.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:42.602 00:44:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:42.602 00:44:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:42.602 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:24:42.602 [2024-04-27 00:44:16.185406] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:24:42.602 [2024-04-27 00:44:16.185627] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135565 ] 00:24:42.860 [2024-04-27 00:44:16.354293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.118 [2024-04-27 00:44:16.611261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.377 [2024-04-27 00:44:16.814154] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:43.636 00:44:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:43.636 00:44:17 -- common/autotest_common.sh@850 -- # return 0 00:24:43.636 00:44:17 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:43.636 00:44:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:43.636 00:44:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:43.636 00:44:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:43.636 00:44:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:43.636 00:44:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:43.636 00:44:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:43.636 00:44:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:43.637 00:44:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:43.895 malloc1 00:24:43.895 00:44:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:43.895 [2024-04-27 00:44:17.481355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:43.895 [2024-04-27 00:44:17.481708] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:43.895 [2024-04-27 00:44:17.481790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:24:43.895 [2024-04-27 00:44:17.482151] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.153 [2024-04-27 00:44:17.485066] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.153 [2024-04-27 00:44:17.485261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:44.153 pt1 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:44.153 00:44:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:44.412 malloc2 00:24:44.412 00:44:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:24:44.412 [2024-04-27 00:44:17.986201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:44.412 [2024-04-27 00:44:17.986554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.412 [2024-04-27 00:44:17.986644] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:24:44.412 [2024-04-27 00:44:17.987058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.412 [2024-04-27 00:44:17.989668] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.412 [2024-04-27 00:44:17.989851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:44.412 pt2 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:44.670 00:44:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:44.670 malloc3 00:24:44.670 00:44:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:44.929 [2024-04-27 00:44:18.438933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:44.929 [2024-04-27 00:44:18.439342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.929 [2024-04-27 00:44:18.439536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:24:44.929 [2024-04-27 00:44:18.439720] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.929 [2024-04-27 00:44:18.442609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.929 [2024-04-27 00:44:18.442850] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:44.929 pt3 00:24:44.929 00:44:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:44.929 00:44:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:44.929 00:44:18 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:24:45.187 [2024-04-27 00:44:18.687398] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:45.187 [2024-04-27 00:44:18.689938] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:45.187 [2024-04-27 00:44:18.690164] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:45.187 [2024-04-27 00:44:18.690510] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:24:45.187 [2024-04-27 00:44:18.690671] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:45.187 [2024-04-27 00:44:18.690891] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:45.187 [2024-04-27 00:44:18.695685] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:24:45.187 [2024-04-27 00:44:18.695843] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:24:45.187 [2024-04-27 00:44:18.696205] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.187 00:44:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.445 00:44:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.445 "name": "raid_bdev1", 00:24:45.445 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:45.445 "strip_size_kb": 64, 00:24:45.445 "state": "online", 00:24:45.445 "raid_level": "raid5f", 00:24:45.445 "superblock": true, 00:24:45.445 "num_base_bdevs": 3, 00:24:45.445 "num_base_bdevs_discovered": 3, 00:24:45.445 "num_base_bdevs_operational": 3, 00:24:45.445 "base_bdevs_list": [ 00:24:45.445 { 00:24:45.445 "name": "pt1", 00:24:45.445 "uuid": "9528dfab-af8e-51dd-99fa-67fe78040471", 00:24:45.445 "is_configured": true, 00:24:45.445 "data_offset": 2048, 00:24:45.445 "data_size": 63488 00:24:45.445 }, 00:24:45.445 { 00:24:45.445 "name": "pt2", 00:24:45.445 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:24:45.445 "is_configured": true, 00:24:45.445 "data_offset": 2048, 00:24:45.445 "data_size": 63488 00:24:45.445 }, 00:24:45.445 { 00:24:45.445 "name": "pt3", 00:24:45.445 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:45.445 "is_configured": true, 00:24:45.445 "data_offset": 2048, 00:24:45.445 "data_size": 63488 00:24:45.445 } 00:24:45.445 ] 00:24:45.445 }' 00:24:45.445 00:44:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.445 00:44:18 -- common/autotest_common.sh@10 -- # set +x 00:24:46.380 00:44:19 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:46.380 00:44:19 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:46.380 [2024-04-27 00:44:19.843141] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:46.380 00:44:19 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=52812940-b54b-4297-a91e-82ab2b9d53be 00:24:46.380 00:44:19 -- bdev/bdev_raid.sh@380 -- # '[' -z 52812940-b54b-4297-a91e-82ab2b9d53be ']' 00:24:46.380 00:44:19 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:46.639 [2024-04-27 00:44:20.094971] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:46.639 [2024-04-27 00:44:20.095314] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.639 [2024-04-27 00:44:20.095507] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.639 [2024-04-27 00:44:20.095719] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.639 [2024-04-27 00:44:20.095832] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:24:46.639 00:44:20 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.639 00:44:20 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:46.898 00:44:20 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:46.898 00:44:20 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:46.898 00:44:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:46.898 00:44:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:47.156 00:44:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:47.156 00:44:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:47.414 00:44:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:47.414 00:44:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:47.673 00:44:21 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:47.673 00:44:21 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:47.931 00:44:21 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:47.931 00:44:21 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:47.931 00:44:21 -- common/autotest_common.sh@638 -- # local es=0 00:24:47.931 00:44:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:47.931 00:44:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.931 00:44:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:47.931 00:44:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.931 00:44:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:47.931 00:44:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.931 00:44:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:47.931 00:44:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.931 00:44:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:47.931 00:44:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:24:48.190 [2024-04-27 00:44:21.659311] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:48.190 [2024-04-27 00:44:21.661493] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:48.190 [2024-04-27 00:44:21.661699] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:48.190 [2024-04-27 00:44:21.661798] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:48.190 [2024-04-27 00:44:21.662036] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:48.190 [2024-04-27 00:44:21.662207] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:48.190 [2024-04-27 00:44:21.662294] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:48.190 [2024-04-27 00:44:21.662451] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:24:48.190 request: 00:24:48.190 { 00:24:48.190 "name": "raid_bdev1", 00:24:48.190 "raid_level": "raid5f", 00:24:48.190 "base_bdevs": [ 00:24:48.190 "malloc1", 00:24:48.190 "malloc2", 00:24:48.190 "malloc3" 00:24:48.190 ], 00:24:48.190 "superblock": false, 00:24:48.190 "strip_size_kb": 64, 00:24:48.190 "method": "bdev_raid_create", 00:24:48.190 "req_id": 1 00:24:48.190 } 00:24:48.190 Got JSON-RPC error response 00:24:48.190 response: 00:24:48.190 { 00:24:48.190 "code": -17, 00:24:48.190 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:48.190 } 00:24:48.190 00:44:21 -- common/autotest_common.sh@641 -- # es=1 00:24:48.190 00:44:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:48.190 00:44:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:48.190 00:44:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:48.190 00:44:21 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.190 00:44:21 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:48.449 00:44:21 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:48.449 00:44:21 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:48.449 00:44:21 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:48.708 [2024-04-27 00:44:22.200605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:48.708 [2024-04-27 00:44:22.200894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.708 [2024-04-27 00:44:22.201088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:24:48.708 [2024-04-27 00:44:22.201234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.708 [2024-04-27 00:44:22.203898] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.708 [2024-04-27 00:44:22.204087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:48.708 [2024-04-27 00:44:22.204351] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:48.708 [2024-04-27 00:44:22.204511] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:48.708 pt1 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.708 00:44:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.967 00:44:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.967 "name": "raid_bdev1", 00:24:48.967 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:48.967 "strip_size_kb": 64, 00:24:48.967 "state": "configuring", 00:24:48.967 "raid_level": "raid5f", 00:24:48.967 "superblock": true, 00:24:48.967 "num_base_bdevs": 3, 00:24:48.967 "num_base_bdevs_discovered": 1, 00:24:48.967 "num_base_bdevs_operational": 3, 00:24:48.967 "base_bdevs_list": [ 00:24:48.967 { 00:24:48.967 "name": "pt1", 00:24:48.967 "uuid": "9528dfab-af8e-51dd-99fa-67fe78040471", 00:24:48.967 "is_configured": true, 00:24:48.967 "data_offset": 2048, 00:24:48.967 "data_size": 63488 00:24:48.967 }, 00:24:48.967 { 00:24:48.967 "name": null, 00:24:48.967 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:24:48.967 "is_configured": false, 00:24:48.967 "data_offset": 2048, 00:24:48.967 "data_size": 63488 00:24:48.967 }, 00:24:48.967 { 00:24:48.967 "name": null, 00:24:48.967 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:48.967 "is_configured": false, 00:24:48.967 "data_offset": 2048, 00:24:48.967 "data_size": 63488 00:24:48.967 } 00:24:48.967 ] 00:24:48.967 }' 00:24:48.967 00:44:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.967 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.535 00:44:23 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:24:49.535 00:44:23 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:49.793 [2024-04-27 00:44:23.309111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:49.793 [2024-04-27 00:44:23.309404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.793 [2024-04-27 00:44:23.309590] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:49.793 [2024-04-27 00:44:23.309730] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.793 [2024-04-27 00:44:23.310296] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.793 [2024-04-27 00:44:23.310516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:49.793 [2024-04-27 00:44:23.310786] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:49.793 [2024-04-27 00:44:23.310935] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:49.793 pt2 00:24:49.793 00:44:23 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:50.051 [2024-04-27 00:44:23.585188] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.051 00:44:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:50.308 00:44:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:50.308 "name": "raid_bdev1", 00:24:50.308 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:50.308 "strip_size_kb": 64, 00:24:50.308 "state": "configuring", 00:24:50.308 "raid_level": "raid5f", 00:24:50.308 "superblock": true, 00:24:50.308 "num_base_bdevs": 3, 00:24:50.308 "num_base_bdevs_discovered": 1, 00:24:50.308 "num_base_bdevs_operational": 3, 00:24:50.308 "base_bdevs_list": [ 00:24:50.308 { 00:24:50.308 "name": "pt1", 00:24:50.308 "uuid": "9528dfab-af8e-51dd-99fa-67fe78040471", 00:24:50.308 "is_configured": true, 00:24:50.308 "data_offset": 2048, 00:24:50.308 "data_size": 63488 00:24:50.308 }, 00:24:50.308 { 00:24:50.308 "name": null, 00:24:50.308 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:24:50.308 "is_configured": false, 00:24:50.308 "data_offset": 2048, 00:24:50.308 "data_size": 63488 00:24:50.308 }, 00:24:50.308 { 00:24:50.308 "name": null, 00:24:50.308 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:50.308 "is_configured": false, 00:24:50.308 "data_offset": 2048, 00:24:50.308 "data_size": 63488 00:24:50.308 } 00:24:50.308 ] 00:24:50.308 }' 00:24:50.308 00:44:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:50.308 00:44:23 -- common/autotest_common.sh@10 -- # set +x 00:24:51.241 00:44:24 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:51.241 00:44:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:51.241 00:44:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:51.241 [2024-04-27 00:44:24.773441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:51.241 [2024-04-27 00:44:24.773575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.241 [2024-04-27 00:44:24.773618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:51.241 [2024-04-27 00:44:24.773646] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.241 [2024-04-27 00:44:24.774180] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.241 [2024-04-27 00:44:24.774230] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:51.241 [2024-04-27 00:44:24.774417] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:51.241 [2024-04-27 00:44:24.774446] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:51.241 pt2 00:24:51.241 00:44:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:51.241 00:44:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:51.241 00:44:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:51.499 [2024-04-27 00:44:25.029478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:51.499 [2024-04-27 00:44:25.029578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.499 [2024-04-27 00:44:25.029616] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:51.499 [2024-04-27 00:44:25.029646] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.499 [2024-04-27 00:44:25.030152] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.499 [2024-04-27 00:44:25.030201] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:51.499 [2024-04-27 00:44:25.030336] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:51.499 [2024-04-27 00:44:25.030399] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:51.499 [2024-04-27 00:44:25.030549] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:24:51.499 [2024-04-27 00:44:25.030564] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:51.499 [2024-04-27 00:44:25.030673] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:51.499 [2024-04-27 00:44:25.035409] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:24:51.499 [2024-04-27 00:44:25.035433] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:24:51.499 [2024-04-27 00:44:25.035618] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.499 pt3 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.499 00:44:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.499 
00:44:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.757 00:44:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:51.757 "name": "raid_bdev1", 00:24:51.757 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:51.757 "strip_size_kb": 64, 00:24:51.757 "state": "online", 00:24:51.757 "raid_level": "raid5f", 00:24:51.757 "superblock": true, 00:24:51.757 "num_base_bdevs": 3, 00:24:51.757 "num_base_bdevs_discovered": 3, 00:24:51.757 "num_base_bdevs_operational": 3, 00:24:51.757 "base_bdevs_list": [ 00:24:51.757 { 00:24:51.757 "name": "pt1", 00:24:51.757 "uuid": "9528dfab-af8e-51dd-99fa-67fe78040471", 00:24:51.757 "is_configured": true, 00:24:51.757 "data_offset": 2048, 00:24:51.757 "data_size": 63488 00:24:51.757 }, 00:24:51.757 { 00:24:51.757 "name": "pt2", 00:24:51.757 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:24:51.757 "is_configured": true, 00:24:51.757 "data_offset": 2048, 00:24:51.757 "data_size": 63488 00:24:51.757 }, 00:24:51.757 { 00:24:51.757 "name": "pt3", 00:24:51.757 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:51.757 "is_configured": true, 00:24:51.757 "data_offset": 2048, 00:24:51.757 "data_size": 63488 00:24:51.757 } 00:24:51.757 ] 00:24:51.757 }' 00:24:51.757 00:44:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.757 00:44:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.323 00:44:25 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:52.323 00:44:25 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:52.580 [2024-04-27 00:44:26.060867] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:52.580 00:44:26 -- bdev/bdev_raid.sh@430 -- # '[' 52812940-b54b-4297-a91e-82ab2b9d53be '!=' 52812940-b54b-4297-a91e-82ab2b9d53be ']' 00:24:52.580 00:44:26 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:24:52.580 00:44:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:52.580 00:44:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:52.580 00:44:26 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:52.838 [2024-04-27 00:44:26.272757] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.838 00:44:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.096 00:44:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:53.096 "name": "raid_bdev1", 00:24:53.096 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:53.096 "strip_size_kb": 64, 
00:24:53.096 "state": "online", 00:24:53.096 "raid_level": "raid5f", 00:24:53.096 "superblock": true, 00:24:53.096 "num_base_bdevs": 3, 00:24:53.096 "num_base_bdevs_discovered": 2, 00:24:53.096 "num_base_bdevs_operational": 2, 00:24:53.096 "base_bdevs_list": [ 00:24:53.096 { 00:24:53.096 "name": null, 00:24:53.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.096 "is_configured": false, 00:24:53.096 "data_offset": 2048, 00:24:53.096 "data_size": 63488 00:24:53.096 }, 00:24:53.096 { 00:24:53.096 "name": "pt2", 00:24:53.096 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:24:53.096 "is_configured": true, 00:24:53.096 "data_offset": 2048, 00:24:53.096 "data_size": 63488 00:24:53.096 }, 00:24:53.096 { 00:24:53.096 "name": "pt3", 00:24:53.096 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:53.096 "is_configured": true, 00:24:53.096 "data_offset": 2048, 00:24:53.096 "data_size": 63488 00:24:53.096 } 00:24:53.096 ] 00:24:53.096 }' 00:24:53.096 00:44:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:53.096 00:44:26 -- common/autotest_common.sh@10 -- # set +x 00:24:53.660 00:44:27 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:53.923 [2024-04-27 00:44:27.340927] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:53.923 [2024-04-27 00:44:27.340965] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:53.923 [2024-04-27 00:44:27.341054] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:53.923 [2024-04-27 00:44:27.341138] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:53.923 [2024-04-27 00:44:27.341151] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:24:53.923 00:44:27 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:24:53.923 00:44:27 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.194 00:44:27 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:24:54.195 00:44:27 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:24:54.195 00:44:27 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:24:54.195 00:44:27 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:54.195 00:44:27 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:54.452 00:44:27 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:54.452 00:44:27 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:54.452 00:44:27 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:54.710 00:44:28 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:54.710 00:44:28 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:54.710 00:44:28 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:24:54.710 00:44:28 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:54.710 00:44:28 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:54.967 [2024-04-27 00:44:28.341145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:54.967 [2024-04-27 00:44:28.341251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:24:54.967 [2024-04-27 00:44:28.341297] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:24:54.967 [2024-04-27 00:44:28.341326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.967 [2024-04-27 00:44:28.343908] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.967 [2024-04-27 00:44:28.343991] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:54.967 [2024-04-27 00:44:28.344143] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:54.967 [2024-04-27 00:44:28.344210] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:54.967 pt2 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.968 00:44:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.225 00:44:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:55.225 "name": "raid_bdev1", 00:24:55.225 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:55.225 "strip_size_kb": 64, 00:24:55.225 "state": "configuring", 00:24:55.225 "raid_level": "raid5f", 00:24:55.225 "superblock": true, 00:24:55.225 "num_base_bdevs": 3, 00:24:55.225 "num_base_bdevs_discovered": 1, 00:24:55.225 "num_base_bdevs_operational": 2, 00:24:55.225 "base_bdevs_list": [ 00:24:55.225 { 00:24:55.225 "name": null, 00:24:55.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.225 "is_configured": false, 00:24:55.225 "data_offset": 2048, 00:24:55.225 "data_size": 63488 00:24:55.225 }, 00:24:55.225 { 00:24:55.225 "name": "pt2", 00:24:55.225 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:24:55.225 "is_configured": true, 00:24:55.225 "data_offset": 2048, 00:24:55.225 "data_size": 63488 00:24:55.225 }, 00:24:55.225 { 00:24:55.225 "name": null, 00:24:55.225 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:55.225 "is_configured": false, 00:24:55.225 "data_offset": 2048, 00:24:55.225 "data_size": 63488 00:24:55.225 } 00:24:55.225 ] 00:24:55.225 }' 00:24:55.225 00:44:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:55.225 00:44:28 -- common/autotest_common.sh@10 -- # set +x 00:24:55.791 00:44:29 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:55.791 00:44:29 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:55.791 00:44:29 -- bdev/bdev_raid.sh@462 -- # i=2 00:24:55.791 00:44:29 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:56.049 [2024-04-27 00:44:29.573441] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:56.049 [2024-04-27 00:44:29.573573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.049 [2024-04-27 00:44:29.573618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:56.049 [2024-04-27 00:44:29.573645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.049 [2024-04-27 00:44:29.574186] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.049 [2024-04-27 00:44:29.574230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:56.049 [2024-04-27 00:44:29.574405] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:56.049 [2024-04-27 00:44:29.574442] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:56.049 [2024-04-27 00:44:29.574578] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:24:56.049 [2024-04-27 00:44:29.574591] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:56.049 [2024-04-27 00:44:29.574678] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:56.049 [2024-04-27 00:44:29.579337] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:24:56.049 [2024-04-27 00:44:29.579363] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:24:56.049 [2024-04-27 00:44:29.579712] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.049 pt3 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.049 00:44:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.307 00:44:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:56.307 "name": "raid_bdev1", 00:24:56.307 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:56.307 "strip_size_kb": 64, 00:24:56.307 "state": "online", 00:24:56.307 "raid_level": "raid5f", 00:24:56.307 "superblock": true, 00:24:56.307 "num_base_bdevs": 3, 00:24:56.307 "num_base_bdevs_discovered": 2, 00:24:56.307 "num_base_bdevs_operational": 2, 00:24:56.307 "base_bdevs_list": [ 00:24:56.307 { 00:24:56.307 "name": null, 00:24:56.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.307 "is_configured": false, 00:24:56.307 "data_offset": 2048, 00:24:56.307 "data_size": 63488 00:24:56.307 }, 00:24:56.307 { 00:24:56.307 "name": "pt2", 00:24:56.307 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 
00:24:56.307 "is_configured": true, 00:24:56.307 "data_offset": 2048, 00:24:56.307 "data_size": 63488 00:24:56.307 }, 00:24:56.307 { 00:24:56.307 "name": "pt3", 00:24:56.307 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:56.307 "is_configured": true, 00:24:56.307 "data_offset": 2048, 00:24:56.307 "data_size": 63488 00:24:56.307 } 00:24:56.307 ] 00:24:56.307 }' 00:24:56.307 00:44:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:56.307 00:44:29 -- common/autotest_common.sh@10 -- # set +x 00:24:56.873 00:44:30 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:24:56.873 00:44:30 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:57.131 [2024-04-27 00:44:30.641284] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:57.131 [2024-04-27 00:44:30.641316] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:57.131 [2024-04-27 00:44:30.641414] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:57.131 [2024-04-27 00:44:30.641524] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:57.131 [2024-04-27 00:44:30.641535] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:24:57.131 00:44:30 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.131 00:44:30 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:24:57.390 00:44:30 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:24:57.390 00:44:30 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:24:57.390 00:44:30 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:57.648 [2024-04-27 00:44:31.113376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:57.648 [2024-04-27 00:44:31.113531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.648 [2024-04-27 00:44:31.113571] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:24:57.648 [2024-04-27 00:44:31.113599] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.648 [2024-04-27 00:44:31.115919] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.648 [2024-04-27 00:44:31.115981] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:57.648 [2024-04-27 00:44:31.116119] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:57.648 [2024-04-27 00:44:31.116174] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:57.648 pt1 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.648 00:44:31 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.648 00:44:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.907 00:44:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.907 "name": "raid_bdev1", 00:24:57.907 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:57.907 "strip_size_kb": 64, 00:24:57.907 "state": "configuring", 00:24:57.907 "raid_level": "raid5f", 00:24:57.907 "superblock": true, 00:24:57.907 "num_base_bdevs": 3, 00:24:57.907 "num_base_bdevs_discovered": 1, 00:24:57.907 "num_base_bdevs_operational": 3, 00:24:57.907 "base_bdevs_list": [ 00:24:57.907 { 00:24:57.907 "name": "pt1", 00:24:57.907 "uuid": "9528dfab-af8e-51dd-99fa-67fe78040471", 00:24:57.907 "is_configured": true, 00:24:57.907 "data_offset": 2048, 00:24:57.907 "data_size": 63488 00:24:57.907 }, 00:24:57.907 { 00:24:57.907 "name": null, 00:24:57.907 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:24:57.907 "is_configured": false, 00:24:57.907 "data_offset": 2048, 00:24:57.907 "data_size": 63488 00:24:57.907 }, 00:24:57.907 { 00:24:57.907 "name": null, 00:24:57.907 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:57.907 "is_configured": false, 00:24:57.907 "data_offset": 2048, 00:24:57.907 "data_size": 63488 00:24:57.907 } 00:24:57.907 ] 00:24:57.907 }' 00:24:57.907 00:44:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.907 00:44:31 -- common/autotest_common.sh@10 -- # set +x 00:24:58.474 00:44:31 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:24:58.474 00:44:31 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:58.474 00:44:31 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:58.732 00:44:32 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:58.732 00:44:32 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:58.732 00:44:32 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:58.990 00:44:32 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:58.990 00:44:32 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:58.990 00:44:32 -- bdev/bdev_raid.sh@489 -- # i=2 00:24:58.990 00:44:32 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:58.990 [2024-04-27 00:44:32.569791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:58.990 [2024-04-27 00:44:32.569911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.990 [2024-04-27 00:44:32.569968] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:58.990 [2024-04-27 00:44:32.569997] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.990 [2024-04-27 00:44:32.570680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.990 [2024-04-27 00:44:32.570756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:58.990 [2024-04-27 00:44:32.570896] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:24:58.990 [2024-04-27 00:44:32.570912] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:58.990 [2024-04-27 00:44:32.570940] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:58.990 [2024-04-27 00:44:32.570960] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:24:58.990 [2024-04-27 00:44:32.571063] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:58.990 pt3 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.248 "name": "raid_bdev1", 00:24:59.248 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:24:59.248 "strip_size_kb": 64, 00:24:59.248 "state": "configuring", 00:24:59.248 "raid_level": "raid5f", 00:24:59.248 "superblock": true, 00:24:59.248 "num_base_bdevs": 3, 00:24:59.248 "num_base_bdevs_discovered": 1, 00:24:59.248 "num_base_bdevs_operational": 2, 00:24:59.248 "base_bdevs_list": [ 00:24:59.248 { 00:24:59.248 "name": null, 00:24:59.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.248 "is_configured": false, 00:24:59.248 "data_offset": 2048, 00:24:59.248 "data_size": 63488 00:24:59.248 }, 00:24:59.248 { 00:24:59.248 "name": null, 00:24:59.248 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:24:59.248 "is_configured": false, 00:24:59.248 "data_offset": 2048, 00:24:59.248 "data_size": 63488 00:24:59.248 }, 00:24:59.248 { 00:24:59.248 "name": "pt3", 00:24:59.248 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:24:59.248 "is_configured": true, 00:24:59.248 "data_offset": 2048, 00:24:59.248 "data_size": 63488 00:24:59.248 } 00:24:59.248 ] 00:24:59.248 }' 00:24:59.248 00:44:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.248 00:44:32 -- common/autotest_common.sh@10 -- # set +x 00:24:59.817 00:44:33 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:24:59.817 00:44:33 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:59.817 00:44:33 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:00.075 [2024-04-27 00:44:33.638074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:00.075 [2024-04-27 00:44:33.638211] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.075 [2024-04-27 
00:44:33.638250] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:00.075 [2024-04-27 00:44:33.638280] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.075 [2024-04-27 00:44:33.638937] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.075 [2024-04-27 00:44:33.638986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:00.075 [2024-04-27 00:44:33.639093] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:00.075 [2024-04-27 00:44:33.639145] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:00.075 [2024-04-27 00:44:33.639293] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:25:00.075 [2024-04-27 00:44:33.639307] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:00.075 [2024-04-27 00:44:33.639402] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:25:00.075 [2024-04-27 00:44:33.643991] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:25:00.075 [2024-04-27 00:44:33.644015] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:25:00.075 [2024-04-27 00:44:33.644283] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.075 pt2 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:00.075 00:44:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:00.335 00:44:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.335 00:44:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.335 00:44:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:00.335 "name": "raid_bdev1", 00:25:00.335 "uuid": "52812940-b54b-4297-a91e-82ab2b9d53be", 00:25:00.335 "strip_size_kb": 64, 00:25:00.335 "state": "online", 00:25:00.335 "raid_level": "raid5f", 00:25:00.335 "superblock": true, 00:25:00.335 "num_base_bdevs": 3, 00:25:00.335 "num_base_bdevs_discovered": 2, 00:25:00.335 "num_base_bdevs_operational": 2, 00:25:00.335 "base_bdevs_list": [ 00:25:00.335 { 00:25:00.335 "name": null, 00:25:00.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.335 "is_configured": false, 00:25:00.335 "data_offset": 2048, 00:25:00.335 "data_size": 63488 00:25:00.335 }, 00:25:00.335 { 00:25:00.335 "name": "pt2", 00:25:00.335 "uuid": "636f4078-80e5-5a34-879d-285063216d36", 00:25:00.335 "is_configured": true, 00:25:00.335 "data_offset": 2048, 
00:25:00.335 "data_size": 63488 00:25:00.335 }, 00:25:00.335 { 00:25:00.335 "name": "pt3", 00:25:00.335 "uuid": "3ae7611b-2159-5c5c-ac1f-a17db007b647", 00:25:00.335 "is_configured": true, 00:25:00.335 "data_offset": 2048, 00:25:00.335 "data_size": 63488 00:25:00.335 } 00:25:00.335 ] 00:25:00.335 }' 00:25:00.335 00:44:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:00.335 00:44:33 -- common/autotest_common.sh@10 -- # set +x 00:25:00.901 00:44:34 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:00.901 00:44:34 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:01.159 [2024-04-27 00:44:34.677836] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:01.159 00:44:34 -- bdev/bdev_raid.sh@506 -- # '[' 52812940-b54b-4297-a91e-82ab2b9d53be '!=' 52812940-b54b-4297-a91e-82ab2b9d53be ']' 00:25:01.159 00:44:34 -- bdev/bdev_raid.sh@511 -- # killprocess 135565 00:25:01.159 00:44:34 -- common/autotest_common.sh@936 -- # '[' -z 135565 ']' 00:25:01.159 00:44:34 -- common/autotest_common.sh@940 -- # kill -0 135565 00:25:01.159 00:44:34 -- common/autotest_common.sh@941 -- # uname 00:25:01.159 00:44:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:01.159 00:44:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 135565 00:25:01.159 00:44:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:01.159 killing process with pid 135565 00:25:01.159 00:44:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:01.159 00:44:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 135565' 00:25:01.159 00:44:34 -- common/autotest_common.sh@955 -- # kill 135565 00:25:01.159 [2024-04-27 00:44:34.719648] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:01.159 00:44:34 -- common/autotest_common.sh@960 -- # wait 135565 00:25:01.159 [2024-04-27 00:44:34.719731] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:01.159 [2024-04-27 00:44:34.719792] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:01.159 [2024-04-27 00:44:34.719804] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, state offline 00:25:01.418 [2024-04-27 00:44:34.933033] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:02.354 00:44:35 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:02.354 ************************************ 00:25:02.354 END TEST raid5f_superblock_test 00:25:02.354 ************************************ 00:25:02.354 00:25:02.354 real 0m19.806s 00:25:02.354 user 0m36.280s 00:25:02.354 sys 0m2.315s 00:25:02.354 00:44:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:02.354 00:44:35 -- common/autotest_common.sh@10 -- # set +x 00:25:02.613 00:44:35 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:02.613 00:44:35 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:25:02.613 00:44:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:02.613 00:44:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:02.613 00:44:35 -- common/autotest_common.sh@10 -- # set +x 00:25:02.613 ************************************ 00:25:02.613 START TEST raid5f_rebuild_test 00:25:02.613 ************************************ 00:25:02.613 00:44:36 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 3 
false false 00:25:02.613 00:44:36 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:02.613 00:44:36 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:25:02.613 00:44:36 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:02.613 00:44:36 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:02.613 00:44:36 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@544 -- # raid_pid=136184 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:02.614 00:44:36 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136184 /var/tmp/spdk-raid.sock 00:25:02.614 00:44:36 -- common/autotest_common.sh@817 -- # '[' -z 136184 ']' 00:25:02.614 00:44:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:02.614 00:44:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:02.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:02.614 00:44:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:02.614 00:44:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:02.614 00:44:36 -- common/autotest_common.sh@10 -- # set +x 00:25:02.614 [2024-04-27 00:44:36.083184] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:02.614 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:02.614 Zero copy mechanism will not be used. 
00:25:02.614 [2024-04-27 00:44:36.083376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136184 ] 00:25:02.872 [2024-04-27 00:44:36.253299] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.130 [2024-04-27 00:44:36.466426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.130 [2024-04-27 00:44:36.640253] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:03.695 00:44:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:03.695 00:44:36 -- common/autotest_common.sh@850 -- # return 0 00:25:03.695 00:44:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:03.695 00:44:36 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:03.695 00:44:36 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:03.695 BaseBdev1 00:25:03.953 00:44:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:03.953 00:44:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:03.953 00:44:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:03.953 BaseBdev2 00:25:04.211 00:44:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:04.211 00:44:37 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:04.211 00:44:37 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:04.469 BaseBdev3 00:25:04.469 00:44:37 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:04.727 spare_malloc 00:25:04.727 00:44:38 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:04.727 spare_delay 00:25:04.986 00:44:38 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:04.986 [2024-04-27 00:44:38.519234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:04.986 [2024-04-27 00:44:38.519355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.986 [2024-04-27 00:44:38.519393] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:04.986 [2024-04-27 00:44:38.519439] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.986 [2024-04-27 00:44:38.521717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.986 [2024-04-27 00:44:38.521769] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:04.986 spare 00:25:04.986 00:44:38 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:25:05.243 [2024-04-27 00:44:38.779339] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.244 [2024-04-27 00:44:38.781237] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:25:05.244 [2024-04-27 00:44:38.781292] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:05.244 [2024-04-27 00:44:38.781381] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:25:05.244 [2024-04-27 00:44:38.781394] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:05.244 [2024-04-27 00:44:38.781546] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:05.244 [2024-04-27 00:44:38.786053] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:25:05.244 [2024-04-27 00:44:38.786078] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:25:05.244 [2024-04-27 00:44:38.786337] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.244 00:44:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.502 00:44:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.502 "name": "raid_bdev1", 00:25:05.502 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:05.502 "strip_size_kb": 64, 00:25:05.502 "state": "online", 00:25:05.502 "raid_level": "raid5f", 00:25:05.502 "superblock": false, 00:25:05.502 "num_base_bdevs": 3, 00:25:05.502 "num_base_bdevs_discovered": 3, 00:25:05.502 "num_base_bdevs_operational": 3, 00:25:05.502 "base_bdevs_list": [ 00:25:05.502 { 00:25:05.502 "name": "BaseBdev1", 00:25:05.502 "uuid": "e60e2c59-6d60-420b-b871-fae41ce02ee8", 00:25:05.502 "is_configured": true, 00:25:05.502 "data_offset": 0, 00:25:05.502 "data_size": 65536 00:25:05.502 }, 00:25:05.502 { 00:25:05.502 "name": "BaseBdev2", 00:25:05.502 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:05.502 "is_configured": true, 00:25:05.502 "data_offset": 0, 00:25:05.502 "data_size": 65536 00:25:05.502 }, 00:25:05.502 { 00:25:05.502 "name": "BaseBdev3", 00:25:05.502 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:05.502 "is_configured": true, 00:25:05.502 "data_offset": 0, 00:25:05.502 "data_size": 65536 00:25:05.502 } 00:25:05.502 ] 00:25:05.502 }' 00:25:05.502 00:44:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:05.502 00:44:39 -- common/autotest_common.sh@10 -- # set +x 00:25:06.069 00:44:39 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:06.069 00:44:39 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:06.328 [2024-04-27 00:44:39.879880] 
bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:06.328 00:44:39 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:25:06.328 00:44:39 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.328 00:44:39 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:06.893 00:44:40 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:06.893 00:44:40 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:06.893 00:44:40 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:06.893 00:44:40 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@12 -- # local i 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:06.893 [2024-04-27 00:44:40.423939] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:06.893 /dev/nbd0 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:06.893 00:44:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:06.893 00:44:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:06.893 00:44:40 -- common/autotest_common.sh@855 -- # local i 00:25:06.893 00:44:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:06.893 00:44:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:06.893 00:44:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:07.151 00:44:40 -- common/autotest_common.sh@859 -- # break 00:25:07.151 00:44:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:07.151 00:44:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:07.151 00:44:40 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:07.151 1+0 records in 00:25:07.151 1+0 records out 00:25:07.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246034 s, 16.6 MB/s 00:25:07.151 00:44:40 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:07.151 00:44:40 -- common/autotest_common.sh@872 -- # size=4096 00:25:07.151 00:44:40 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:07.151 00:44:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:07.151 00:44:40 -- common/autotest_common.sh@875 -- # return 0 00:25:07.151 00:44:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:07.151 00:44:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:07.151 00:44:40 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:07.151 00:44:40 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:25:07.151 00:44:40 -- bdev/bdev_raid.sh@582 -- # echo 128 00:25:07.151 00:44:40 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:25:07.410 512+0 records in 
00:25:07.410 512+0 records out 00:25:07.410 67108864 bytes (67 MB, 64 MiB) copied, 0.498638 s, 135 MB/s 00:25:07.410 00:44:40 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:07.410 00:44:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:07.410 00:44:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:07.410 00:44:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:07.410 00:44:40 -- bdev/nbd_common.sh@51 -- # local i 00:25:07.410 00:44:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:07.410 00:44:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:07.978 00:44:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:07.978 00:44:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:07.978 00:44:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:07.978 00:44:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:07.978 00:44:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:07.978 00:44:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:07.978 [2024-04-27 00:44:41.268374] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.978 00:44:41 -- bdev/nbd_common.sh@41 -- # break 00:25:07.978 00:44:41 -- bdev/nbd_common.sh@45 -- # return 0 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:07.978 [2024-04-27 00:44:41.490189] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.978 00:44:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.236 00:44:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:08.236 "name": "raid_bdev1", 00:25:08.236 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:08.236 "strip_size_kb": 64, 00:25:08.236 "state": "online", 00:25:08.236 "raid_level": "raid5f", 00:25:08.236 "superblock": false, 00:25:08.236 "num_base_bdevs": 3, 00:25:08.236 "num_base_bdevs_discovered": 2, 00:25:08.236 "num_base_bdevs_operational": 2, 00:25:08.236 "base_bdevs_list": [ 00:25:08.236 { 00:25:08.236 "name": null, 00:25:08.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.236 "is_configured": false, 00:25:08.236 "data_offset": 0, 00:25:08.236 "data_size": 65536 00:25:08.236 }, 00:25:08.236 { 00:25:08.237 "name": "BaseBdev2", 00:25:08.237 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:08.237 "is_configured": true, 00:25:08.237 "data_offset": 0, 
00:25:08.237 "data_size": 65536 00:25:08.237 }, 00:25:08.237 { 00:25:08.237 "name": "BaseBdev3", 00:25:08.237 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:08.237 "is_configured": true, 00:25:08.237 "data_offset": 0, 00:25:08.237 "data_size": 65536 00:25:08.237 } 00:25:08.237 ] 00:25:08.237 }' 00:25:08.237 00:44:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:08.237 00:44:41 -- common/autotest_common.sh@10 -- # set +x 00:25:08.802 00:44:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:09.060 [2024-04-27 00:44:42.618518] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:09.060 [2024-04-27 00:44:42.618652] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:09.060 [2024-04-27 00:44:42.631192] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:25:09.060 [2024-04-27 00:44:42.637493] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:09.318 00:44:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:10.266 00:44:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:10.266 00:44:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:10.266 00:44:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:10.266 00:44:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:10.266 00:44:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:10.266 00:44:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.266 00:44:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.525 00:44:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:10.525 "name": "raid_bdev1", 00:25:10.525 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:10.525 "strip_size_kb": 64, 00:25:10.525 "state": "online", 00:25:10.525 "raid_level": "raid5f", 00:25:10.525 "superblock": false, 00:25:10.525 "num_base_bdevs": 3, 00:25:10.525 "num_base_bdevs_discovered": 3, 00:25:10.525 "num_base_bdevs_operational": 3, 00:25:10.525 "process": { 00:25:10.525 "type": "rebuild", 00:25:10.525 "target": "spare", 00:25:10.525 "progress": { 00:25:10.525 "blocks": 24576, 00:25:10.525 "percent": 18 00:25:10.525 } 00:25:10.525 }, 00:25:10.525 "base_bdevs_list": [ 00:25:10.525 { 00:25:10.525 "name": "spare", 00:25:10.525 "uuid": "b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:10.525 "is_configured": true, 00:25:10.525 "data_offset": 0, 00:25:10.525 "data_size": 65536 00:25:10.525 }, 00:25:10.525 { 00:25:10.525 "name": "BaseBdev2", 00:25:10.525 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:10.525 "is_configured": true, 00:25:10.525 "data_offset": 0, 00:25:10.525 "data_size": 65536 00:25:10.525 }, 00:25:10.525 { 00:25:10.525 "name": "BaseBdev3", 00:25:10.525 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:10.525 "is_configured": true, 00:25:10.525 "data_offset": 0, 00:25:10.525 "data_size": 65536 00:25:10.525 } 00:25:10.525 ] 00:25:10.525 }' 00:25:10.525 00:44:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:10.525 00:44:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:10.525 00:44:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:10.525 00:44:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:10.525 00:44:43 -- 
bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:10.784 [2024-04-27 00:44:44.179704] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:10.784 [2024-04-27 00:44:44.252892] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:10.784 [2024-04-27 00:44:44.253730] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:10.784 00:44:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:10.784 00:44:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:10.784 00:44:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:10.784 00:44:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:10.784 00:44:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:10.784 00:44:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:10.784 00:44:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.784 00:44:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.785 00:44:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:10.785 00:44:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.785 00:44:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.785 00:44:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.044 00:44:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:11.044 "name": "raid_bdev1", 00:25:11.044 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:11.044 "strip_size_kb": 64, 00:25:11.044 "state": "online", 00:25:11.044 "raid_level": "raid5f", 00:25:11.044 "superblock": false, 00:25:11.044 "num_base_bdevs": 3, 00:25:11.044 "num_base_bdevs_discovered": 2, 00:25:11.044 "num_base_bdevs_operational": 2, 00:25:11.044 "base_bdevs_list": [ 00:25:11.044 { 00:25:11.044 "name": null, 00:25:11.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.044 "is_configured": false, 00:25:11.044 "data_offset": 0, 00:25:11.044 "data_size": 65536 00:25:11.044 }, 00:25:11.044 { 00:25:11.044 "name": "BaseBdev2", 00:25:11.044 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:11.044 "is_configured": true, 00:25:11.044 "data_offset": 0, 00:25:11.044 "data_size": 65536 00:25:11.044 }, 00:25:11.044 { 00:25:11.044 "name": "BaseBdev3", 00:25:11.044 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:11.044 "is_configured": true, 00:25:11.044 "data_offset": 0, 00:25:11.044 "data_size": 65536 00:25:11.044 } 00:25:11.044 ] 00:25:11.044 }' 00:25:11.044 00:44:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:11.044 00:44:44 -- common/autotest_common.sh@10 -- # set +x 00:25:11.981 00:44:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:11.981 00:44:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:11.981 00:44:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:11.981 00:44:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:11.981 00:44:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:11.981 00:44:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.981 00:44:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.981 00:44:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 
00:25:11.981 "name": "raid_bdev1", 00:25:11.981 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:11.981 "strip_size_kb": 64, 00:25:11.981 "state": "online", 00:25:11.981 "raid_level": "raid5f", 00:25:11.981 "superblock": false, 00:25:11.981 "num_base_bdevs": 3, 00:25:11.981 "num_base_bdevs_discovered": 2, 00:25:11.981 "num_base_bdevs_operational": 2, 00:25:11.981 "base_bdevs_list": [ 00:25:11.981 { 00:25:11.981 "name": null, 00:25:11.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.981 "is_configured": false, 00:25:11.981 "data_offset": 0, 00:25:11.981 "data_size": 65536 00:25:11.981 }, 00:25:11.981 { 00:25:11.981 "name": "BaseBdev2", 00:25:11.981 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:11.981 "is_configured": true, 00:25:11.981 "data_offset": 0, 00:25:11.981 "data_size": 65536 00:25:11.981 }, 00:25:11.981 { 00:25:11.981 "name": "BaseBdev3", 00:25:11.981 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:11.981 "is_configured": true, 00:25:11.981 "data_offset": 0, 00:25:11.981 "data_size": 65536 00:25:11.982 } 00:25:11.982 ] 00:25:11.982 }' 00:25:11.982 00:44:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:11.982 00:44:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:11.982 00:44:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:11.982 00:44:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:11.982 00:44:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:12.241 [2024-04-27 00:44:45.776621] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:12.241 [2024-04-27 00:44:45.776690] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:12.241 [2024-04-27 00:44:45.788315] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:25:12.241 [2024-04-27 00:44:45.794448] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:12.241 00:44:45 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:13.619 00:44:46 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.619 00:44:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:13.619 00:44:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:13.619 00:44:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:13.619 00:44:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:13.619 00:44:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.619 00:44:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:13.619 "name": "raid_bdev1", 00:25:13.619 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:13.619 "strip_size_kb": 64, 00:25:13.619 "state": "online", 00:25:13.619 "raid_level": "raid5f", 00:25:13.619 "superblock": false, 00:25:13.619 "num_base_bdevs": 3, 00:25:13.619 "num_base_bdevs_discovered": 3, 00:25:13.619 "num_base_bdevs_operational": 3, 00:25:13.619 "process": { 00:25:13.619 "type": "rebuild", 00:25:13.619 "target": "spare", 00:25:13.619 "progress": { 00:25:13.619 "blocks": 24576, 00:25:13.619 "percent": 18 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 "base_bdevs_list": [ 00:25:13.619 { 00:25:13.619 "name": "spare", 00:25:13.619 "uuid": 
"b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:13.619 "is_configured": true, 00:25:13.619 "data_offset": 0, 00:25:13.619 "data_size": 65536 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "name": "BaseBdev2", 00:25:13.619 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:13.619 "is_configured": true, 00:25:13.619 "data_offset": 0, 00:25:13.619 "data_size": 65536 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "name": "BaseBdev3", 00:25:13.619 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:13.619 "is_configured": true, 00:25:13.619 "data_offset": 0, 00:25:13.619 "data_size": 65536 00:25:13.619 } 00:25:13.619 ] 00:25:13.619 }' 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@657 -- # local timeout=637 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.619 00:44:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.878 00:44:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:13.878 "name": "raid_bdev1", 00:25:13.878 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:13.878 "strip_size_kb": 64, 00:25:13.878 "state": "online", 00:25:13.878 "raid_level": "raid5f", 00:25:13.878 "superblock": false, 00:25:13.878 "num_base_bdevs": 3, 00:25:13.878 "num_base_bdevs_discovered": 3, 00:25:13.878 "num_base_bdevs_operational": 3, 00:25:13.878 "process": { 00:25:13.878 "type": "rebuild", 00:25:13.878 "target": "spare", 00:25:13.878 "progress": { 00:25:13.878 "blocks": 32768, 00:25:13.878 "percent": 25 00:25:13.878 } 00:25:13.878 }, 00:25:13.878 "base_bdevs_list": [ 00:25:13.878 { 00:25:13.878 "name": "spare", 00:25:13.878 "uuid": "b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:13.878 "is_configured": true, 00:25:13.878 "data_offset": 0, 00:25:13.878 "data_size": 65536 00:25:13.878 }, 00:25:13.878 { 00:25:13.878 "name": "BaseBdev2", 00:25:13.878 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:13.878 "is_configured": true, 00:25:13.878 "data_offset": 0, 00:25:13.878 "data_size": 65536 00:25:13.878 }, 00:25:13.878 { 00:25:13.878 "name": "BaseBdev3", 00:25:13.878 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:13.878 "is_configured": true, 00:25:13.878 "data_offset": 0, 00:25:13.878 "data_size": 65536 00:25:13.878 } 00:25:13.878 ] 00:25:13.878 }' 00:25:13.878 00:44:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:14.137 00:44:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:14.137 
00:44:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:14.137 00:44:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:14.137 00:44:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:15.072 00:44:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:15.072 00:44:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.072 00:44:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:15.072 00:44:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:15.072 00:44:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:15.072 00:44:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:15.072 00:44:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.072 00:44:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.331 00:44:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:15.331 "name": "raid_bdev1", 00:25:15.331 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:15.331 "strip_size_kb": 64, 00:25:15.331 "state": "online", 00:25:15.331 "raid_level": "raid5f", 00:25:15.331 "superblock": false, 00:25:15.331 "num_base_bdevs": 3, 00:25:15.331 "num_base_bdevs_discovered": 3, 00:25:15.331 "num_base_bdevs_operational": 3, 00:25:15.331 "process": { 00:25:15.331 "type": "rebuild", 00:25:15.331 "target": "spare", 00:25:15.331 "progress": { 00:25:15.331 "blocks": 59392, 00:25:15.331 "percent": 45 00:25:15.331 } 00:25:15.331 }, 00:25:15.331 "base_bdevs_list": [ 00:25:15.331 { 00:25:15.331 "name": "spare", 00:25:15.331 "uuid": "b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:15.331 "is_configured": true, 00:25:15.331 "data_offset": 0, 00:25:15.331 "data_size": 65536 00:25:15.331 }, 00:25:15.331 { 00:25:15.331 "name": "BaseBdev2", 00:25:15.331 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:15.331 "is_configured": true, 00:25:15.331 "data_offset": 0, 00:25:15.331 "data_size": 65536 00:25:15.331 }, 00:25:15.331 { 00:25:15.331 "name": "BaseBdev3", 00:25:15.331 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:15.331 "is_configured": true, 00:25:15.331 "data_offset": 0, 00:25:15.331 "data_size": 65536 00:25:15.331 } 00:25:15.331 ] 00:25:15.331 }' 00:25:15.331 00:44:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:15.331 00:44:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.331 00:44:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:15.331 00:44:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.331 00:44:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:16.709 00:44:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:16.709 00:44:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.709 00:44:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:16.709 00:44:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:16.709 00:44:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:16.709 00:44:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:16.709 00:44:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.709 00:44:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.709 00:44:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:16.709 "name": "raid_bdev1", 00:25:16.709 "uuid": 
"77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:16.709 "strip_size_kb": 64, 00:25:16.709 "state": "online", 00:25:16.709 "raid_level": "raid5f", 00:25:16.709 "superblock": false, 00:25:16.709 "num_base_bdevs": 3, 00:25:16.709 "num_base_bdevs_discovered": 3, 00:25:16.709 "num_base_bdevs_operational": 3, 00:25:16.709 "process": { 00:25:16.709 "type": "rebuild", 00:25:16.709 "target": "spare", 00:25:16.709 "progress": { 00:25:16.709 "blocks": 86016, 00:25:16.709 "percent": 65 00:25:16.709 } 00:25:16.709 }, 00:25:16.709 "base_bdevs_list": [ 00:25:16.709 { 00:25:16.709 "name": "spare", 00:25:16.709 "uuid": "b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:16.709 "is_configured": true, 00:25:16.709 "data_offset": 0, 00:25:16.709 "data_size": 65536 00:25:16.709 }, 00:25:16.709 { 00:25:16.709 "name": "BaseBdev2", 00:25:16.709 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:16.709 "is_configured": true, 00:25:16.709 "data_offset": 0, 00:25:16.709 "data_size": 65536 00:25:16.709 }, 00:25:16.709 { 00:25:16.709 "name": "BaseBdev3", 00:25:16.709 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:16.709 "is_configured": true, 00:25:16.709 "data_offset": 0, 00:25:16.709 "data_size": 65536 00:25:16.709 } 00:25:16.709 ] 00:25:16.709 }' 00:25:16.709 00:44:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:16.709 00:44:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.709 00:44:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.709 00:44:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.709 00:44:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:18.084 "name": "raid_bdev1", 00:25:18.084 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:18.084 "strip_size_kb": 64, 00:25:18.084 "state": "online", 00:25:18.084 "raid_level": "raid5f", 00:25:18.084 "superblock": false, 00:25:18.084 "num_base_bdevs": 3, 00:25:18.084 "num_base_bdevs_discovered": 3, 00:25:18.084 "num_base_bdevs_operational": 3, 00:25:18.084 "process": { 00:25:18.084 "type": "rebuild", 00:25:18.084 "target": "spare", 00:25:18.084 "progress": { 00:25:18.084 "blocks": 114688, 00:25:18.084 "percent": 87 00:25:18.084 } 00:25:18.084 }, 00:25:18.084 "base_bdevs_list": [ 00:25:18.084 { 00:25:18.084 "name": "spare", 00:25:18.084 "uuid": "b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:18.084 "is_configured": true, 00:25:18.084 "data_offset": 0, 00:25:18.084 "data_size": 65536 00:25:18.084 }, 00:25:18.084 { 00:25:18.084 "name": "BaseBdev2", 00:25:18.084 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:18.084 "is_configured": true, 00:25:18.084 "data_offset": 0, 00:25:18.084 "data_size": 65536 00:25:18.084 }, 00:25:18.084 { 00:25:18.084 "name": "BaseBdev3", 00:25:18.084 "uuid": 
"d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:18.084 "is_configured": true, 00:25:18.084 "data_offset": 0, 00:25:18.084 "data_size": 65536 00:25:18.084 } 00:25:18.084 ] 00:25:18.084 }' 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.084 00:44:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:19.028 [2024-04-27 00:44:52.252276] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:19.028 [2024-04-27 00:44:52.252386] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:19.028 [2024-04-27 00:44:52.252510] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:19.287 00:44:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:19.287 00:44:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.287 00:44:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:19.287 00:44:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:19.287 00:44:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:19.287 00:44:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:19.287 00:44:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.287 00:44:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.545 00:44:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:19.545 "name": "raid_bdev1", 00:25:19.545 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:19.546 "strip_size_kb": 64, 00:25:19.546 "state": "online", 00:25:19.546 "raid_level": "raid5f", 00:25:19.546 "superblock": false, 00:25:19.546 "num_base_bdevs": 3, 00:25:19.546 "num_base_bdevs_discovered": 3, 00:25:19.546 "num_base_bdevs_operational": 3, 00:25:19.546 "base_bdevs_list": [ 00:25:19.546 { 00:25:19.546 "name": "spare", 00:25:19.546 "uuid": "b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:19.546 "is_configured": true, 00:25:19.546 "data_offset": 0, 00:25:19.546 "data_size": 65536 00:25:19.546 }, 00:25:19.546 { 00:25:19.546 "name": "BaseBdev2", 00:25:19.546 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:19.546 "is_configured": true, 00:25:19.546 "data_offset": 0, 00:25:19.546 "data_size": 65536 00:25:19.546 }, 00:25:19.546 { 00:25:19.546 "name": "BaseBdev3", 00:25:19.546 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:19.546 "is_configured": true, 00:25:19.546 "data_offset": 0, 00:25:19.546 "data_size": 65536 00:25:19.546 } 00:25:19.546 ] 00:25:19.546 }' 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@660 -- # break 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@185 -- # local 
target=none 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.546 00:44:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:19.806 "name": "raid_bdev1", 00:25:19.806 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:19.806 "strip_size_kb": 64, 00:25:19.806 "state": "online", 00:25:19.806 "raid_level": "raid5f", 00:25:19.806 "superblock": false, 00:25:19.806 "num_base_bdevs": 3, 00:25:19.806 "num_base_bdevs_discovered": 3, 00:25:19.806 "num_base_bdevs_operational": 3, 00:25:19.806 "base_bdevs_list": [ 00:25:19.806 { 00:25:19.806 "name": "spare", 00:25:19.806 "uuid": "b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:19.806 "is_configured": true, 00:25:19.806 "data_offset": 0, 00:25:19.806 "data_size": 65536 00:25:19.806 }, 00:25:19.806 { 00:25:19.806 "name": "BaseBdev2", 00:25:19.806 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:19.806 "is_configured": true, 00:25:19.806 "data_offset": 0, 00:25:19.806 "data_size": 65536 00:25:19.806 }, 00:25:19.806 { 00:25:19.806 "name": "BaseBdev3", 00:25:19.806 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:19.806 "is_configured": true, 00:25:19.806 "data_offset": 0, 00:25:19.806 "data_size": 65536 00:25:19.806 } 00:25:19.806 ] 00:25:19.806 }' 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.806 00:44:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.065 00:44:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:20.065 "name": "raid_bdev1", 00:25:20.065 "uuid": "77f4884c-dc4e-4585-b2fb-19cb1e0ed0d4", 00:25:20.065 "strip_size_kb": 64, 00:25:20.065 "state": "online", 00:25:20.065 "raid_level": "raid5f", 00:25:20.065 "superblock": false, 00:25:20.065 "num_base_bdevs": 3, 00:25:20.065 "num_base_bdevs_discovered": 3, 00:25:20.065 "num_base_bdevs_operational": 3, 00:25:20.065 "base_bdevs_list": [ 00:25:20.065 { 00:25:20.065 "name": "spare", 00:25:20.065 "uuid": "b89425a4-c46a-5e0b-b976-2af3536ea532", 00:25:20.065 "is_configured": true, 00:25:20.065 "data_offset": 0, 00:25:20.065 "data_size": 65536 00:25:20.065 }, 00:25:20.065 { 
00:25:20.065 "name": "BaseBdev2", 00:25:20.065 "uuid": "475d7b69-2e09-4ba1-89a6-57884e658fe0", 00:25:20.065 "is_configured": true, 00:25:20.065 "data_offset": 0, 00:25:20.065 "data_size": 65536 00:25:20.065 }, 00:25:20.065 { 00:25:20.065 "name": "BaseBdev3", 00:25:20.065 "uuid": "d3b8bca2-5a3c-4bcb-831e-fbd66714fda8", 00:25:20.065 "is_configured": true, 00:25:20.065 "data_offset": 0, 00:25:20.065 "data_size": 65536 00:25:20.065 } 00:25:20.065 ] 00:25:20.065 }' 00:25:20.065 00:44:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:20.065 00:44:53 -- common/autotest_common.sh@10 -- # set +x 00:25:20.632 00:44:54 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:20.891 [2024-04-27 00:44:54.411613] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:20.891 [2024-04-27 00:44:54.411667] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:20.891 [2024-04-27 00:44:54.411774] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:20.891 [2024-04-27 00:44:54.411877] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:20.891 [2024-04-27 00:44:54.411893] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:25:20.891 00:44:54 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.891 00:44:54 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:21.149 00:44:54 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:21.149 00:44:54 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:21.149 00:44:54 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@12 -- # local i 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:21.149 00:44:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:21.408 /dev/nbd0 00:25:21.408 00:44:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:21.408 00:44:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:21.408 00:44:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:21.408 00:44:54 -- common/autotest_common.sh@855 -- # local i 00:25:21.408 00:44:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:21.408 00:44:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:21.408 00:44:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:21.408 00:44:54 -- common/autotest_common.sh@859 -- # break 00:25:21.408 00:44:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:21.408 00:44:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:21.408 00:44:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:25:21.408 1+0 records in 00:25:21.408 1+0 records out 00:25:21.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470809 s, 8.7 MB/s 00:25:21.408 00:44:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.408 00:44:54 -- common/autotest_common.sh@872 -- # size=4096 00:25:21.408 00:44:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.408 00:44:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:21.408 00:44:54 -- common/autotest_common.sh@875 -- # return 0 00:25:21.408 00:44:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:21.408 00:44:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:21.408 00:44:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:21.667 /dev/nbd1 00:25:21.667 00:44:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:21.667 00:44:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:21.667 00:44:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:21.667 00:44:55 -- common/autotest_common.sh@855 -- # local i 00:25:21.667 00:44:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:21.667 00:44:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:21.667 00:44:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:21.667 00:44:55 -- common/autotest_common.sh@859 -- # break 00:25:21.667 00:44:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:21.667 00:44:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:21.667 00:44:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:21.667 1+0 records in 00:25:21.667 1+0 records out 00:25:21.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610892 s, 6.7 MB/s 00:25:21.667 00:44:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.667 00:44:55 -- common/autotest_common.sh@872 -- # size=4096 00:25:21.667 00:44:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:21.667 00:44:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:21.667 00:44:55 -- common/autotest_common.sh@875 -- # return 0 00:25:21.667 00:44:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:21.667 00:44:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:21.667 00:44:55 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:21.926 00:44:55 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:21.926 00:44:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:21.926 00:44:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:21.926 00:44:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:21.926 00:44:55 -- bdev/nbd_common.sh@51 -- # local i 00:25:21.926 00:44:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:21.926 00:44:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@41 -- # break 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@45 -- # return 0 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:22.184 00:44:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:22.443 00:44:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:22.443 00:44:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:22.443 00:44:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:22.443 00:44:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:22.443 00:44:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:22.443 00:44:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:22.443 00:44:55 -- bdev/nbd_common.sh@41 -- # break 00:25:22.443 00:44:55 -- bdev/nbd_common.sh@45 -- # return 0 00:25:22.443 00:44:55 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:22.443 00:44:55 -- bdev/bdev_raid.sh@709 -- # killprocess 136184 00:25:22.443 00:44:55 -- common/autotest_common.sh@936 -- # '[' -z 136184 ']' 00:25:22.443 00:44:55 -- common/autotest_common.sh@940 -- # kill -0 136184 00:25:22.443 00:44:55 -- common/autotest_common.sh@941 -- # uname 00:25:22.443 00:44:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:22.443 00:44:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136184 00:25:22.443 00:44:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:22.443 00:44:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:22.443 killing process with pid 136184 00:25:22.443 00:44:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136184' 00:25:22.443 00:44:55 -- common/autotest_common.sh@955 -- # kill 136184 00:25:22.443 Received shutdown signal, test time was about 60.000000 seconds 00:25:22.443 00:25:22.443 Latency(us) 00:25:22.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.443 =================================================================================================================== 00:25:22.443 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:22.443 [2024-04-27 00:44:55.940788] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:22.443 00:44:55 -- common/autotest_common.sh@960 -- # wait 136184 00:25:22.701 [2024-04-27 00:44:56.239821] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:24.076 00:25:24.076 real 0m21.314s 00:25:24.076 user 0m31.956s 00:25:24.076 sys 0m2.697s 00:25:24.076 00:44:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:24.076 00:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.076 ************************************ 00:25:24.076 END TEST raid5f_rebuild_test 00:25:24.076 ************************************ 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:25:24.076 00:44:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:25:24.076 00:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:24.076 00:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.076 ************************************ 00:25:24.076 START TEST raid5f_rebuild_test_sb 00:25:24.076 ************************************ 00:25:24.076 00:44:57 -- 
common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 3 true false 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@544 -- # raid_pid=136734 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:24.076 00:44:57 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136734 /var/tmp/spdk-raid.sock 00:25:24.076 00:44:57 -- common/autotest_common.sh@817 -- # '[' -z 136734 ']' 00:25:24.076 00:44:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:24.076 00:44:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:24.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:24.076 00:44:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:24.076 00:44:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:24.076 00:44:57 -- common/autotest_common.sh@10 -- # set +x 00:25:24.076 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:24.076 Zero copy mechanism will not be used. 00:25:24.076 [2024-04-27 00:44:57.488204] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
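For reference, the rebuild sequence the harness drives in the transcript above can be reproduced by hand against the bdevperf RPC socket. Below is a condensed sketch, assuming a bdevperf instance is already listening on /var/tmp/spdk-raid.sock; every RPC method and parameter is taken verbatim from the log, while the loop over base bdevs and the final polling loop are illustrative.

  #!/usr/bin/env bash
  # Sketch of the raid5f degrade-and-rebuild flow exercised by bdev_raid.sh above.
  # Assumes bdevperf is already up and serving RPCs on this socket.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Three 32 MB malloc base bdevs with 512-byte blocks (65536 blocks each).
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_create 32 512 -b "$b"
  done

  # raid5f across the three base bdevs with a 64 KiB strip size.
  $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1

  # Degrade the array, then attach a "spare" built as malloc -> delay -> passthru.
  $rpc bdev_raid_remove_base_bdev BaseBdev1
  $rpc bdev_malloc_create 32 512 -b spare_malloc
  $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  $rpc bdev_passthru_create -b spare_delay -p spare
  $rpc bdev_raid_add_base_bdev raid_bdev1 spare

  # Poll the raid bdev info until the rebuild process is no longer reported.
  while [ "$($rpc bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')" = rebuild ]; do
      sleep 1
  done

The delay bdev is the piece that makes the rebuild observable: with -w/-n set to 100000 us, writes to the spare are throttled, which is why the repeated bdev_raid_get_bdevs polls in the transcript catch the rebuild process at increasing "blocks" counts before it completes.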
00:25:24.076 [2024-04-27 00:44:57.488381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136734 ] 00:25:24.076 [2024-04-27 00:44:57.657729] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.335 [2024-04-27 00:44:57.866507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.593 [2024-04-27 00:44:58.062539] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.160 00:44:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:25.160 00:44:58 -- common/autotest_common.sh@850 -- # return 0 00:25:25.160 00:44:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:25.160 00:44:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:25.160 00:44:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:25.160 BaseBdev1_malloc 00:25:25.160 00:44:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:25.418 [2024-04-27 00:44:58.985935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:25.418 [2024-04-27 00:44:58.986089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.418 [2024-04-27 00:44:58.986143] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:25.418 [2024-04-27 00:44:58.986200] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.418 [2024-04-27 00:44:58.988975] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.418 [2024-04-27 00:44:58.989022] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:25.418 BaseBdev1 00:25:25.418 00:44:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:25.418 00:44:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:25.418 00:44:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:25.676 BaseBdev2_malloc 00:25:25.676 00:44:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:25.934 [2024-04-27 00:44:59.445697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:25.934 [2024-04-27 00:44:59.445833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.935 [2024-04-27 00:44:59.445894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:25.935 [2024-04-27 00:44:59.445958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.935 [2024-04-27 00:44:59.448534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.935 [2024-04-27 00:44:59.448593] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:25.935 BaseBdev2 00:25:25.935 00:44:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:25.935 00:44:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:25.935 00:44:59 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:26.193 BaseBdev3_malloc 00:25:26.193 00:44:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:26.452 [2024-04-27 00:44:59.962978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:26.452 [2024-04-27 00:44:59.963183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.452 [2024-04-27 00:44:59.963240] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:26.452 [2024-04-27 00:44:59.963290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.452 [2024-04-27 00:44:59.966009] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.452 [2024-04-27 00:44:59.966068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:26.452 BaseBdev3 00:25:26.452 00:44:59 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:26.710 spare_malloc 00:25:26.710 00:45:00 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:26.969 spare_delay 00:25:26.969 00:45:00 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:27.226 [2024-04-27 00:45:00.657465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:27.226 [2024-04-27 00:45:00.657615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.226 [2024-04-27 00:45:00.657665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:27.226 [2024-04-27 00:45:00.657717] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.226 [2024-04-27 00:45:00.660295] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.226 [2024-04-27 00:45:00.660352] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:27.226 spare 00:25:27.226 00:45:00 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:25:27.484 [2024-04-27 00:45:00.905649] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:27.484 [2024-04-27 00:45:00.907922] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:27.484 [2024-04-27 00:45:00.908006] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:27.484 [2024-04-27 00:45:00.908283] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:25:27.484 [2024-04-27 00:45:00.908305] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:27.484 [2024-04-27 00:45:00.908489] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:27.484 [2024-04-27 00:45:00.913012] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:25:27.484 [2024-04-27 00:45:00.913039] 
bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:25:27.484 [2024-04-27 00:45:00.913302] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.484 00:45:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.743 00:45:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:27.743 "name": "raid_bdev1", 00:25:27.743 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:27.743 "strip_size_kb": 64, 00:25:27.743 "state": "online", 00:25:27.743 "raid_level": "raid5f", 00:25:27.743 "superblock": true, 00:25:27.743 "num_base_bdevs": 3, 00:25:27.743 "num_base_bdevs_discovered": 3, 00:25:27.743 "num_base_bdevs_operational": 3, 00:25:27.743 "base_bdevs_list": [ 00:25:27.743 { 00:25:27.743 "name": "BaseBdev1", 00:25:27.743 "uuid": "e9da61f3-76c8-5108-9a54-2644dc67e316", 00:25:27.743 "is_configured": true, 00:25:27.743 "data_offset": 2048, 00:25:27.743 "data_size": 63488 00:25:27.743 }, 00:25:27.743 { 00:25:27.743 "name": "BaseBdev2", 00:25:27.743 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:27.743 "is_configured": true, 00:25:27.743 "data_offset": 2048, 00:25:27.743 "data_size": 63488 00:25:27.743 }, 00:25:27.743 { 00:25:27.743 "name": "BaseBdev3", 00:25:27.743 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:27.743 "is_configured": true, 00:25:27.743 "data_offset": 2048, 00:25:27.743 "data_size": 63488 00:25:27.743 } 00:25:27.743 ] 00:25:27.743 }' 00:25:27.743 00:45:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.743 00:45:01 -- common/autotest_common.sh@10 -- # set +x 00:25:28.310 00:45:01 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:28.310 00:45:01 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:28.569 [2024-04-27 00:45:02.031014] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:28.569 00:45:02 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:25:28.569 00:45:02 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.569 00:45:02 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:28.827 00:45:02 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:28.827 00:45:02 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:28.827 00:45:02 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:28.827 00:45:02 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
raid_bdev1 /dev/nbd0 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@12 -- # local i 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:28.827 00:45:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:29.086 [2024-04-27 00:45:02.534984] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:29.086 /dev/nbd0 00:25:29.086 00:45:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:29.086 00:45:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:29.086 00:45:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:29.086 00:45:02 -- common/autotest_common.sh@855 -- # local i 00:25:29.086 00:45:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:29.086 00:45:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:29.086 00:45:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:29.086 00:45:02 -- common/autotest_common.sh@859 -- # break 00:25:29.086 00:45:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:29.086 00:45:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:29.086 00:45:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:29.086 1+0 records in 00:25:29.086 1+0 records out 00:25:29.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351468 s, 11.7 MB/s 00:25:29.086 00:45:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.086 00:45:02 -- common/autotest_common.sh@872 -- # size=4096 00:25:29.086 00:45:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:29.086 00:45:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:29.086 00:45:02 -- common/autotest_common.sh@875 -- # return 0 00:25:29.086 00:45:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:29.086 00:45:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:29.086 00:45:02 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:29.086 00:45:02 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:25:29.086 00:45:02 -- bdev/bdev_raid.sh@582 -- # echo 128 00:25:29.086 00:45:02 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:25:29.653 496+0 records in 00:25:29.653 496+0 records out 00:25:29.653 65011712 bytes (65 MB, 62 MiB) copied, 0.427155 s, 152 MB/s 00:25:29.653 00:45:03 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:29.653 00:45:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:29.653 00:45:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:29.653 00:45:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:29.653 00:45:03 -- bdev/nbd_common.sh@51 -- # local i 00:25:29.653 00:45:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:29.653 00:45:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:25:29.911 00:45:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:29.911 00:45:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:29.911 00:45:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:29.911 00:45:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:29.911 00:45:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:29.911 00:45:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:29.911 [2024-04-27 00:45:03.278031] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.911 00:45:03 -- bdev/nbd_common.sh@41 -- # break 00:25:29.911 00:45:03 -- bdev/nbd_common.sh@45 -- # return 0 00:25:29.911 00:45:03 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:30.169 [2024-04-27 00:45:03.535770] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:30.169 00:45:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.428 00:45:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:30.428 "name": "raid_bdev1", 00:25:30.428 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:30.428 "strip_size_kb": 64, 00:25:30.428 "state": "online", 00:25:30.428 "raid_level": "raid5f", 00:25:30.428 "superblock": true, 00:25:30.428 "num_base_bdevs": 3, 00:25:30.428 "num_base_bdevs_discovered": 2, 00:25:30.428 "num_base_bdevs_operational": 2, 00:25:30.428 "base_bdevs_list": [ 00:25:30.428 { 00:25:30.428 "name": null, 00:25:30.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.428 "is_configured": false, 00:25:30.428 "data_offset": 2048, 00:25:30.428 "data_size": 63488 00:25:30.428 }, 00:25:30.428 { 00:25:30.428 "name": "BaseBdev2", 00:25:30.428 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:30.428 "is_configured": true, 00:25:30.428 "data_offset": 2048, 00:25:30.428 "data_size": 63488 00:25:30.428 }, 00:25:30.428 { 00:25:30.428 "name": "BaseBdev3", 00:25:30.428 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:30.428 "is_configured": true, 00:25:30.428 "data_offset": 2048, 00:25:30.428 "data_size": 63488 00:25:30.428 } 00:25:30.428 ] 00:25:30.428 }' 00:25:30.428 00:45:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:30.428 00:45:03 -- common/autotest_common.sh@10 -- # set +x 00:25:30.994 00:45:04 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:31.252 [2024-04-27 00:45:04.628136] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:25:31.252 [2024-04-27 00:45:04.628226] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:31.252 [2024-04-27 00:45:04.642373] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028b70 00:25:31.252 [2024-04-27 00:45:04.649118] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:31.252 00:45:04 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:32.187 00:45:05 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.187 00:45:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:32.187 00:45:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:32.187 00:45:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:32.187 00:45:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:32.187 00:45:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.188 00:45:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.446 00:45:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:32.446 "name": "raid_bdev1", 00:25:32.446 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:32.446 "strip_size_kb": 64, 00:25:32.446 "state": "online", 00:25:32.446 "raid_level": "raid5f", 00:25:32.446 "superblock": true, 00:25:32.446 "num_base_bdevs": 3, 00:25:32.446 "num_base_bdevs_discovered": 3, 00:25:32.446 "num_base_bdevs_operational": 3, 00:25:32.446 "process": { 00:25:32.446 "type": "rebuild", 00:25:32.446 "target": "spare", 00:25:32.446 "progress": { 00:25:32.446 "blocks": 24576, 00:25:32.446 "percent": 19 00:25:32.446 } 00:25:32.446 }, 00:25:32.446 "base_bdevs_list": [ 00:25:32.446 { 00:25:32.446 "name": "spare", 00:25:32.446 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:32.446 "is_configured": true, 00:25:32.446 "data_offset": 2048, 00:25:32.446 "data_size": 63488 00:25:32.446 }, 00:25:32.446 { 00:25:32.446 "name": "BaseBdev2", 00:25:32.446 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:32.446 "is_configured": true, 00:25:32.446 "data_offset": 2048, 00:25:32.446 "data_size": 63488 00:25:32.446 }, 00:25:32.446 { 00:25:32.446 "name": "BaseBdev3", 00:25:32.446 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:32.446 "is_configured": true, 00:25:32.446 "data_offset": 2048, 00:25:32.446 "data_size": 63488 00:25:32.446 } 00:25:32.446 ] 00:25:32.446 }' 00:25:32.446 00:45:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:32.446 00:45:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.446 00:45:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:32.705 00:45:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.705 00:45:06 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:32.705 [2024-04-27 00:45:06.275376] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:32.963 [2024-04-27 00:45:06.366417] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:32.963 [2024-04-27 00:45:06.366547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
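
(annotation, not part of the captured trace) The dd transfer a few entries above is a full-device, full-stripe write, and its parameters follow directly from the geometry in the RPC output: strip_size_kb=64 means one strip is 64 KiB = 128 blocks of 512 B; raid5f over 3 base bdevs leaves 2 data strips per stripe, hence write_unit_size=256 blocks and bs=131072; the array is 126976 blocks, so count = 126976/256 = 496 and the byte total is 496*131072 = 65011712 = 126976*512, exactly what dd reports. The check that follows hot-removes BaseBdev1 and expects the array to stay online in degraded mode; the state is read back with the same query used throughout the run:

  # sketch of the state query seen in the xtrace; expected fields after the
  # removal: "state": "online", "num_base_bdevs_discovered": 2,
  # "num_base_bdevs_operational": 2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1")'
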
00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.963 00:45:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.222 00:45:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:33.222 "name": "raid_bdev1", 00:25:33.222 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:33.222 "strip_size_kb": 64, 00:25:33.222 "state": "online", 00:25:33.222 "raid_level": "raid5f", 00:25:33.222 "superblock": true, 00:25:33.222 "num_base_bdevs": 3, 00:25:33.222 "num_base_bdevs_discovered": 2, 00:25:33.222 "num_base_bdevs_operational": 2, 00:25:33.222 "base_bdevs_list": [ 00:25:33.222 { 00:25:33.222 "name": null, 00:25:33.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.222 "is_configured": false, 00:25:33.222 "data_offset": 2048, 00:25:33.222 "data_size": 63488 00:25:33.222 }, 00:25:33.222 { 00:25:33.222 "name": "BaseBdev2", 00:25:33.222 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:33.222 "is_configured": true, 00:25:33.222 "data_offset": 2048, 00:25:33.222 "data_size": 63488 00:25:33.222 }, 00:25:33.222 { 00:25:33.222 "name": "BaseBdev3", 00:25:33.222 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:33.222 "is_configured": true, 00:25:33.222 "data_offset": 2048, 00:25:33.222 "data_size": 63488 00:25:33.222 } 00:25:33.222 ] 00:25:33.222 }' 00:25:33.222 00:45:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:33.222 00:45:06 -- common/autotest_common.sh@10 -- # set +x 00:25:33.789 00:45:07 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:33.790 00:45:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.790 00:45:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:33.790 00:45:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:33.790 00:45:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.790 00:45:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.790 00:45:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.048 00:45:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:34.048 "name": "raid_bdev1", 00:25:34.048 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:34.048 "strip_size_kb": 64, 00:25:34.048 "state": "online", 00:25:34.048 "raid_level": "raid5f", 00:25:34.048 "superblock": true, 00:25:34.048 "num_base_bdevs": 3, 00:25:34.048 "num_base_bdevs_discovered": 2, 00:25:34.048 "num_base_bdevs_operational": 2, 00:25:34.048 "base_bdevs_list": [ 00:25:34.048 { 00:25:34.048 "name": null, 00:25:34.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.048 "is_configured": false, 00:25:34.048 "data_offset": 2048, 00:25:34.048 "data_size": 63488 00:25:34.048 }, 00:25:34.048 { 00:25:34.048 "name": "BaseBdev2", 00:25:34.048 "uuid": 
"783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:34.048 "is_configured": true, 00:25:34.048 "data_offset": 2048, 00:25:34.048 "data_size": 63488 00:25:34.048 }, 00:25:34.048 { 00:25:34.048 "name": "BaseBdev3", 00:25:34.048 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:34.048 "is_configured": true, 00:25:34.048 "data_offset": 2048, 00:25:34.048 "data_size": 63488 00:25:34.048 } 00:25:34.048 ] 00:25:34.048 }' 00:25:34.048 00:45:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:34.048 00:45:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:34.048 00:45:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:34.048 00:45:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:34.048 00:45:07 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:34.307 [2024-04-27 00:45:07.783328] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:34.307 [2024-04-27 00:45:07.783397] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:34.307 [2024-04-27 00:45:07.795897] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:25:34.307 [2024-04-27 00:45:07.802421] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:34.307 00:45:07 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:35.254 00:45:08 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.254 00:45:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:35.254 00:45:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:35.254 00:45:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:35.254 00:45:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:35.254 00:45:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.254 00:45:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.520 00:45:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:35.520 "name": "raid_bdev1", 00:25:35.520 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:35.520 "strip_size_kb": 64, 00:25:35.520 "state": "online", 00:25:35.520 "raid_level": "raid5f", 00:25:35.520 "superblock": true, 00:25:35.520 "num_base_bdevs": 3, 00:25:35.520 "num_base_bdevs_discovered": 3, 00:25:35.520 "num_base_bdevs_operational": 3, 00:25:35.520 "process": { 00:25:35.520 "type": "rebuild", 00:25:35.520 "target": "spare", 00:25:35.520 "progress": { 00:25:35.520 "blocks": 24576, 00:25:35.520 "percent": 19 00:25:35.520 } 00:25:35.520 }, 00:25:35.520 "base_bdevs_list": [ 00:25:35.520 { 00:25:35.520 "name": "spare", 00:25:35.520 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:35.520 "is_configured": true, 00:25:35.520 "data_offset": 2048, 00:25:35.520 "data_size": 63488 00:25:35.520 }, 00:25:35.520 { 00:25:35.520 "name": "BaseBdev2", 00:25:35.520 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:35.520 "is_configured": true, 00:25:35.520 "data_offset": 2048, 00:25:35.520 "data_size": 63488 00:25:35.520 }, 00:25:35.520 { 00:25:35.520 "name": "BaseBdev3", 00:25:35.520 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:35.520 "is_configured": true, 00:25:35.520 "data_offset": 2048, 00:25:35.520 "data_size": 63488 00:25:35.520 } 00:25:35.520 ] 00:25:35.520 }' 00:25:35.520 00:45:09 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:35.779 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@657 -- # local timeout=659 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.779 00:45:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.038 00:45:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.038 "name": "raid_bdev1", 00:25:36.038 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:36.038 "strip_size_kb": 64, 00:25:36.038 "state": "online", 00:25:36.038 "raid_level": "raid5f", 00:25:36.038 "superblock": true, 00:25:36.038 "num_base_bdevs": 3, 00:25:36.038 "num_base_bdevs_discovered": 3, 00:25:36.038 "num_base_bdevs_operational": 3, 00:25:36.038 "process": { 00:25:36.038 "type": "rebuild", 00:25:36.038 "target": "spare", 00:25:36.038 "progress": { 00:25:36.038 "blocks": 32768, 00:25:36.038 "percent": 25 00:25:36.038 } 00:25:36.038 }, 00:25:36.038 "base_bdevs_list": [ 00:25:36.038 { 00:25:36.038 "name": "spare", 00:25:36.038 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:36.038 "is_configured": true, 00:25:36.038 "data_offset": 2048, 00:25:36.038 "data_size": 63488 00:25:36.038 }, 00:25:36.038 { 00:25:36.038 "name": "BaseBdev2", 00:25:36.038 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:36.038 "is_configured": true, 00:25:36.038 "data_offset": 2048, 00:25:36.038 "data_size": 63488 00:25:36.038 }, 00:25:36.038 { 00:25:36.038 "name": "BaseBdev3", 00:25:36.038 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:36.038 "is_configured": true, 00:25:36.038 "data_offset": 2048, 00:25:36.039 "data_size": 63488 00:25:36.039 } 00:25:36.039 ] 00:25:36.039 }' 00:25:36.039 00:45:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.039 00:45:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.039 00:45:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.039 00:45:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.039 00:45:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:36.974 00:45:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:36.974 00:45:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.974 00:45:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:36.974 00:45:10 -- 
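
(annotation, not part of the captured trace) The "[: =: unary operator expected" message above is a recorded bug in bdev_raid.sh itself, not an I/O failure: at line 617 an empty, unquoted variable makes the single-bracket test collapse to '[ = false ]', which test(1) cannot parse. The run is unaffected because the failing test is only a condition; the xtrace resumes at line 642. A minimal sketch of the failure mode and the usual fixes, with flag_var as a hypothetical stand-in for whichever variable was empty here:

  flag_var=""
  [ $flag_var = false ]       # word-splits to [ = false ] -> unary operator expected
  [ "$flag_var" = false ]     # quoted operand: evaluates cleanly (to false)
  [[ $flag_var = false ]]     # bash [[ ]] does not word-split, so no quoting needed
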
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:36.974 00:45:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:36.974 00:45:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.974 00:45:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.974 00:45:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.234 00:45:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:37.234 "name": "raid_bdev1", 00:25:37.234 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:37.234 "strip_size_kb": 64, 00:25:37.234 "state": "online", 00:25:37.234 "raid_level": "raid5f", 00:25:37.234 "superblock": true, 00:25:37.234 "num_base_bdevs": 3, 00:25:37.234 "num_base_bdevs_discovered": 3, 00:25:37.234 "num_base_bdevs_operational": 3, 00:25:37.234 "process": { 00:25:37.234 "type": "rebuild", 00:25:37.234 "target": "spare", 00:25:37.234 "progress": { 00:25:37.234 "blocks": 59392, 00:25:37.234 "percent": 46 00:25:37.234 } 00:25:37.234 }, 00:25:37.234 "base_bdevs_list": [ 00:25:37.234 { 00:25:37.234 "name": "spare", 00:25:37.234 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:37.234 "is_configured": true, 00:25:37.234 "data_offset": 2048, 00:25:37.234 "data_size": 63488 00:25:37.234 }, 00:25:37.234 { 00:25:37.234 "name": "BaseBdev2", 00:25:37.234 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:37.234 "is_configured": true, 00:25:37.234 "data_offset": 2048, 00:25:37.234 "data_size": 63488 00:25:37.234 }, 00:25:37.234 { 00:25:37.234 "name": "BaseBdev3", 00:25:37.234 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:37.234 "is_configured": true, 00:25:37.234 "data_offset": 2048, 00:25:37.234 "data_size": 63488 00:25:37.234 } 00:25:37.234 ] 00:25:37.234 }' 00:25:37.234 00:45:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:37.493 00:45:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:37.493 00:45:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:37.493 00:45:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:37.493 00:45:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:38.427 00:45:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:38.427 00:45:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.427 00:45:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:38.427 00:45:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:38.427 00:45:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:38.427 00:45:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:38.427 00:45:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.427 00:45:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.685 00:45:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:38.685 "name": "raid_bdev1", 00:25:38.685 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:38.685 "strip_size_kb": 64, 00:25:38.685 "state": "online", 00:25:38.685 "raid_level": "raid5f", 00:25:38.685 "superblock": true, 00:25:38.685 "num_base_bdevs": 3, 00:25:38.685 "num_base_bdevs_discovered": 3, 00:25:38.685 "num_base_bdevs_operational": 3, 00:25:38.685 "process": { 00:25:38.685 "type": "rebuild", 00:25:38.685 "target": "spare", 00:25:38.685 "progress": { 00:25:38.685 "blocks": 88064, 00:25:38.685 "percent": 69 00:25:38.685 } 
00:25:38.685 }, 00:25:38.685 "base_bdevs_list": [ 00:25:38.685 { 00:25:38.685 "name": "spare", 00:25:38.685 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:38.685 "is_configured": true, 00:25:38.685 "data_offset": 2048, 00:25:38.685 "data_size": 63488 00:25:38.685 }, 00:25:38.685 { 00:25:38.685 "name": "BaseBdev2", 00:25:38.685 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:38.685 "is_configured": true, 00:25:38.685 "data_offset": 2048, 00:25:38.685 "data_size": 63488 00:25:38.685 }, 00:25:38.685 { 00:25:38.685 "name": "BaseBdev3", 00:25:38.685 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:38.685 "is_configured": true, 00:25:38.685 "data_offset": 2048, 00:25:38.685 "data_size": 63488 00:25:38.685 } 00:25:38.685 ] 00:25:38.685 }' 00:25:38.685 00:45:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:38.685 00:45:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.685 00:45:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:38.944 00:45:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.944 00:45:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:39.879 00:45:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:39.879 00:45:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.879 00:45:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.879 00:45:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:39.879 00:45:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:39.879 00:45:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.879 00:45:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.879 00:45:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.138 00:45:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:40.138 "name": "raid_bdev1", 00:25:40.138 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:40.138 "strip_size_kb": 64, 00:25:40.138 "state": "online", 00:25:40.138 "raid_level": "raid5f", 00:25:40.138 "superblock": true, 00:25:40.138 "num_base_bdevs": 3, 00:25:40.138 "num_base_bdevs_discovered": 3, 00:25:40.138 "num_base_bdevs_operational": 3, 00:25:40.138 "process": { 00:25:40.138 "type": "rebuild", 00:25:40.138 "target": "spare", 00:25:40.138 "progress": { 00:25:40.138 "blocks": 114688, 00:25:40.138 "percent": 90 00:25:40.138 } 00:25:40.138 }, 00:25:40.138 "base_bdevs_list": [ 00:25:40.138 { 00:25:40.138 "name": "spare", 00:25:40.138 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:40.138 "is_configured": true, 00:25:40.138 "data_offset": 2048, 00:25:40.138 "data_size": 63488 00:25:40.138 }, 00:25:40.138 { 00:25:40.138 "name": "BaseBdev2", 00:25:40.138 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:40.138 "is_configured": true, 00:25:40.138 "data_offset": 2048, 00:25:40.138 "data_size": 63488 00:25:40.138 }, 00:25:40.138 { 00:25:40.138 "name": "BaseBdev3", 00:25:40.138 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:40.138 "is_configured": true, 00:25:40.138 "data_offset": 2048, 00:25:40.138 "data_size": 63488 00:25:40.138 } 00:25:40.138 ] 00:25:40.138 }' 00:25:40.138 00:45:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:40.138 00:45:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:40.138 00:45:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:40.138 00:45:13 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.138 00:45:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:40.704 [2024-04-27 00:45:14.062863] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:40.704 [2024-04-27 00:45:14.062962] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:40.704 [2024-04-27 00:45:14.063147] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:41.271 00:45:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:41.271 00:45:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.271 00:45:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:41.271 00:45:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:41.271 00:45:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:41.271 00:45:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:41.271 00:45:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.271 00:45:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.530 00:45:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:41.530 "name": "raid_bdev1", 00:25:41.530 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:41.530 "strip_size_kb": 64, 00:25:41.530 "state": "online", 00:25:41.530 "raid_level": "raid5f", 00:25:41.530 "superblock": true, 00:25:41.530 "num_base_bdevs": 3, 00:25:41.530 "num_base_bdevs_discovered": 3, 00:25:41.530 "num_base_bdevs_operational": 3, 00:25:41.530 "base_bdevs_list": [ 00:25:41.530 { 00:25:41.530 "name": "spare", 00:25:41.530 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:41.530 "is_configured": true, 00:25:41.530 "data_offset": 2048, 00:25:41.530 "data_size": 63488 00:25:41.530 }, 00:25:41.530 { 00:25:41.530 "name": "BaseBdev2", 00:25:41.530 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:41.530 "is_configured": true, 00:25:41.530 "data_offset": 2048, 00:25:41.530 "data_size": 63488 00:25:41.530 }, 00:25:41.530 { 00:25:41.530 "name": "BaseBdev3", 00:25:41.530 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:41.530 "is_configured": true, 00:25:41.530 "data_offset": 2048, 00:25:41.530 "data_size": 63488 00:25:41.530 } 00:25:41.530 ] 00:25:41.530 }' 00:25:41.530 00:45:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:41.530 00:45:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:41.530 00:45:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@660 -- # break 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.530 00:45:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.789 00:45:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:41.789 "name": "raid_bdev1", 00:25:41.789 "uuid": 
"e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:41.789 "strip_size_kb": 64, 00:25:41.789 "state": "online", 00:25:41.789 "raid_level": "raid5f", 00:25:41.789 "superblock": true, 00:25:41.789 "num_base_bdevs": 3, 00:25:41.789 "num_base_bdevs_discovered": 3, 00:25:41.789 "num_base_bdevs_operational": 3, 00:25:41.789 "base_bdevs_list": [ 00:25:41.789 { 00:25:41.789 "name": "spare", 00:25:41.789 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:41.789 "is_configured": true, 00:25:41.789 "data_offset": 2048, 00:25:41.789 "data_size": 63488 00:25:41.789 }, 00:25:41.789 { 00:25:41.789 "name": "BaseBdev2", 00:25:41.789 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:41.789 "is_configured": true, 00:25:41.789 "data_offset": 2048, 00:25:41.789 "data_size": 63488 00:25:41.789 }, 00:25:41.789 { 00:25:41.789 "name": "BaseBdev3", 00:25:41.789 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:41.789 "is_configured": true, 00:25:41.789 "data_offset": 2048, 00:25:41.789 "data_size": 63488 00:25:41.789 } 00:25:41.789 ] 00:25:41.789 }' 00:25:41.789 00:45:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:41.789 00:45:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:41.789 00:45:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.048 00:45:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.307 00:45:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:42.307 "name": "raid_bdev1", 00:25:42.307 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:42.307 "strip_size_kb": 64, 00:25:42.307 "state": "online", 00:25:42.307 "raid_level": "raid5f", 00:25:42.307 "superblock": true, 00:25:42.307 "num_base_bdevs": 3, 00:25:42.307 "num_base_bdevs_discovered": 3, 00:25:42.307 "num_base_bdevs_operational": 3, 00:25:42.307 "base_bdevs_list": [ 00:25:42.307 { 00:25:42.308 "name": "spare", 00:25:42.308 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:42.308 "is_configured": true, 00:25:42.308 "data_offset": 2048, 00:25:42.308 "data_size": 63488 00:25:42.308 }, 00:25:42.308 { 00:25:42.308 "name": "BaseBdev2", 00:25:42.308 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:42.308 "is_configured": true, 00:25:42.308 "data_offset": 2048, 00:25:42.308 "data_size": 63488 00:25:42.308 }, 00:25:42.308 { 00:25:42.308 "name": "BaseBdev3", 00:25:42.308 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:42.308 "is_configured": true, 00:25:42.308 "data_offset": 2048, 00:25:42.308 "data_size": 63488 00:25:42.308 } 
00:25:42.308 ] 00:25:42.308 }' 00:25:42.308 00:45:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:42.308 00:45:15 -- common/autotest_common.sh@10 -- # set +x 00:25:42.875 00:45:16 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:43.134 [2024-04-27 00:45:16.546317] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:43.134 [2024-04-27 00:45:16.546392] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:43.134 [2024-04-27 00:45:16.546554] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:43.134 [2024-04-27 00:45:16.546677] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:43.134 [2024-04-27 00:45:16.546702] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:25:43.134 00:45:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:43.134 00:45:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.394 00:45:16 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:43.394 00:45:16 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:43.394 00:45:16 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@12 -- # local i 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:43.394 00:45:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:43.656 /dev/nbd0 00:25:43.656 00:45:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:43.656 00:45:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:43.656 00:45:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:43.656 00:45:17 -- common/autotest_common.sh@855 -- # local i 00:25:43.656 00:45:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:43.656 00:45:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:43.656 00:45:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:43.656 00:45:17 -- common/autotest_common.sh@859 -- # break 00:25:43.656 00:45:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:43.656 00:45:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:43.656 00:45:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:43.656 1+0 records in 00:25:43.656 1+0 records out 00:25:43.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050075 s, 8.2 MB/s 00:25:43.656 00:45:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:43.656 00:45:17 -- common/autotest_common.sh@872 -- # size=4096 00:25:43.656 00:45:17 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:43.656 00:45:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:43.656 00:45:17 -- common/autotest_common.sh@875 -- # return 0 00:25:43.656 00:45:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:43.656 00:45:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:43.656 00:45:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:43.915 /dev/nbd1 00:25:43.915 00:45:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:43.915 00:45:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:43.915 00:45:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:43.915 00:45:17 -- common/autotest_common.sh@855 -- # local i 00:25:43.915 00:45:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:43.915 00:45:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:43.915 00:45:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:43.915 00:45:17 -- common/autotest_common.sh@859 -- # break 00:25:43.915 00:45:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:43.915 00:45:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:43.915 00:45:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:43.915 1+0 records in 00:25:43.915 1+0 records out 00:25:43.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582964 s, 7.0 MB/s 00:25:43.915 00:45:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:43.915 00:45:17 -- common/autotest_common.sh@872 -- # size=4096 00:25:43.915 00:45:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:43.915 00:45:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:43.915 00:45:17 -- common/autotest_common.sh@875 -- # return 0 00:25:43.915 00:45:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:43.915 00:45:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:43.915 00:45:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:44.174 00:45:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:44.174 00:45:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:44.174 00:45:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:44.174 00:45:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:44.174 00:45:17 -- bdev/nbd_common.sh@51 -- # local i 00:25:44.174 00:45:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:44.174 00:45:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@41 -- # break 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@45 -- # return 0 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:44.433 00:45:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:44.692 00:45:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:44.692 00:45:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:44.692 00:45:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:44.692 00:45:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:44.692 00:45:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:44.692 00:45:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:44.692 00:45:18 -- bdev/nbd_common.sh@41 -- # break 00:25:44.692 00:45:18 -- bdev/nbd_common.sh@45 -- # return 0 00:25:44.692 00:45:18 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:44.692 00:45:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:44.692 00:45:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:44.692 00:45:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:44.950 00:45:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:45.209 [2024-04-27 00:45:18.631989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:45.209 [2024-04-27 00:45:18.632107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.209 [2024-04-27 00:45:18.632149] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:45.209 [2024-04-27 00:45:18.632183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.209 [2024-04-27 00:45:18.634977] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.209 [2024-04-27 00:45:18.635061] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:45.209 [2024-04-27 00:45:18.635186] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:45.209 [2024-04-27 00:45:18.635300] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:45.209 BaseBdev1 00:25:45.209 00:45:18 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:45.209 00:45:18 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:25:45.209 00:45:18 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:25:45.468 00:45:18 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:45.727 [2024-04-27 00:45:19.160203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:45.727 [2024-04-27 00:45:19.160375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.727 [2024-04-27 00:45:19.160456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:45.727 [2024-04-27 00:45:19.160495] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.727 [2024-04-27 00:45:19.161207] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.727 [2024-04-27 00:45:19.161296] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:45.727 [2024-04-27 00:45:19.161442] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 
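
(annotation, not part of the captured trace) This phase tears the array down to its raw malloc devices and lets superblock examination rebuild it: each base bdev is cycled through bdev_passthru_delete and bdev_passthru_create, and on every create the raid module examines the new bdev, finds the on-disk raid5f superblock ("raid superblock found on bdev ..."), and prefers the highest sequence number, as the seq_number comparison for BaseBdev2 just below shows. One cycle, as it appears in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_delete BaseBdev2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2

Note also the integrity check a few entries earlier, cmp -i 1048576 /dev/nbd0 /dev/nbd1: the 1048576-byte skip equals data_offset (2048 blocks * 512 B), so the comparison covers only user data, presumably because the per-bdev metadata region is allowed to differ between BaseBdev1 and the rebuilt spare.
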
00:25:45.727 [2024-04-27 00:45:19.161462] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:25:45.727 [2024-04-27 00:45:19.161470] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:45.727 [2024-04-27 00:45:19.161496] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:25:45.727 [2024-04-27 00:45:19.161605] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:45.727 BaseBdev2 00:25:45.727 00:45:19 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:45.727 00:45:19 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:45.727 00:45:19 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:45.986 00:45:19 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:46.245 [2024-04-27 00:45:19.628211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:46.245 [2024-04-27 00:45:19.628318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.245 [2024-04-27 00:45:19.628399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:46.245 [2024-04-27 00:45:19.628427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.245 [2024-04-27 00:45:19.629026] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.245 [2024-04-27 00:45:19.629119] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:46.245 [2024-04-27 00:45:19.629226] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:46.245 [2024-04-27 00:45:19.629256] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:46.245 BaseBdev3 00:25:46.245 00:45:19 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:46.504 00:45:19 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:46.762 [2024-04-27 00:45:20.116448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:46.762 [2024-04-27 00:45:20.116627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.762 [2024-04-27 00:45:20.116684] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:46.763 [2024-04-27 00:45:20.116727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.763 [2024-04-27 00:45:20.117424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.763 [2024-04-27 00:45:20.117530] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:46.763 [2024-04-27 00:45:20.117691] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:46.763 [2024-04-27 00:45:20.117758] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:46.763 spare 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 
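
(annotation, not part of the captured trace) verify_raid_bdev_state expands below exactly as in the earlier checks. Reconstructed from the xtrace alone, its core shape is roughly the following sketch; the positional parameters and the rpc/jq query match the trace, while the assertion details are an assumption (the real helper in test/bdev/bdev_raid.sh also validates the base bdev counts and lists):

  verify_raid_bdev_state() {
      local raid_bdev_name=$1
      local expected_state=$2
      local raid_level=$3
      local strip_size=$4
      local num_base_bdevs_operational=$5
      local raid_bdev_info num_base_bdevs num_base_bdevs_discovered tmp
      # fetch this raid bdev's JSON, as at bdev_raid.sh@127 in the trace
      raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
          -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r ".[] | select(.name == \"$raid_bdev_name\")")
      # assumed assertions over the fetched fields
      [[ $(jq -r .state <<<"$raid_bdev_info") == "$expected_state" ]]
      [[ $(jq -r .raid_level <<<"$raid_bdev_info") == "$raid_level" ]]
      [[ $(jq -r .strip_size_kb <<<"$raid_bdev_info") == "$strip_size" ]]
  }
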
00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.763 00:45:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.763 [2024-04-27 00:45:20.217935] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:25:46.763 [2024-04-27 00:45:20.218004] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:25:46.763 [2024-04-27 00:45:20.218247] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:25:46.763 [2024-04-27 00:45:20.222581] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:25:46.763 [2024-04-27 00:45:20.222607] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:25:46.763 [2024-04-27 00:45:20.222859] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.021 00:45:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:47.021 "name": "raid_bdev1", 00:25:47.021 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:47.021 "strip_size_kb": 64, 00:25:47.021 "state": "online", 00:25:47.021 "raid_level": "raid5f", 00:25:47.021 "superblock": true, 00:25:47.021 "num_base_bdevs": 3, 00:25:47.021 "num_base_bdevs_discovered": 3, 00:25:47.021 "num_base_bdevs_operational": 3, 00:25:47.021 "base_bdevs_list": [ 00:25:47.021 { 00:25:47.021 "name": "spare", 00:25:47.021 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:47.021 "is_configured": true, 00:25:47.021 "data_offset": 2048, 00:25:47.021 "data_size": 63488 00:25:47.021 }, 00:25:47.021 { 00:25:47.021 "name": "BaseBdev2", 00:25:47.021 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:47.021 "is_configured": true, 00:25:47.021 "data_offset": 2048, 00:25:47.021 "data_size": 63488 00:25:47.021 }, 00:25:47.021 { 00:25:47.021 "name": "BaseBdev3", 00:25:47.021 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:47.021 "is_configured": true, 00:25:47.021 "data_offset": 2048, 00:25:47.021 "data_size": 63488 00:25:47.021 } 00:25:47.021 ] 00:25:47.021 }' 00:25:47.021 00:45:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:47.021 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:47.589 00:45:20 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:47.589 00:45:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:47.589 00:45:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:47.589 00:45:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:47.589 00:45:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:47.589 00:45:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:47.589 00:45:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.848 00:45:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:47.848 "name": "raid_bdev1", 00:25:47.848 "uuid": "e69ffbc2-8144-446c-bc2b-1812199b5787", 00:25:47.848 "strip_size_kb": 64, 00:25:47.848 "state": "online", 00:25:47.848 "raid_level": "raid5f", 00:25:47.848 "superblock": true, 00:25:47.848 "num_base_bdevs": 3, 00:25:47.848 "num_base_bdevs_discovered": 3, 00:25:47.848 "num_base_bdevs_operational": 3, 00:25:47.848 "base_bdevs_list": [ 00:25:47.848 { 00:25:47.848 "name": "spare", 00:25:47.848 "uuid": "28d5b486-f123-5d41-951c-47a6a6583772", 00:25:47.848 "is_configured": true, 00:25:47.848 "data_offset": 2048, 00:25:47.848 "data_size": 63488 00:25:47.848 }, 00:25:47.848 { 00:25:47.848 "name": "BaseBdev2", 00:25:47.848 "uuid": "783eb4b5-79d6-50f3-9567-d109d5f1c2a3", 00:25:47.848 "is_configured": true, 00:25:47.848 "data_offset": 2048, 00:25:47.848 "data_size": 63488 00:25:47.848 }, 00:25:47.848 { 00:25:47.848 "name": "BaseBdev3", 00:25:47.848 "uuid": "f08af478-7aba-52ed-896c-81b405d7950d", 00:25:47.848 "is_configured": true, 00:25:47.848 "data_offset": 2048, 00:25:47.848 "data_size": 63488 00:25:47.848 } 00:25:47.848 ] 00:25:47.848 }' 00:25:47.848 00:45:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:47.848 00:45:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:47.848 00:45:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:47.848 00:45:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:47.848 00:45:21 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.848 00:45:21 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:48.107 00:45:21 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:48.107 00:45:21 -- bdev/bdev_raid.sh@709 -- # killprocess 136734 00:25:48.107 00:45:21 -- common/autotest_common.sh@936 -- # '[' -z 136734 ']' 00:25:48.107 00:45:21 -- common/autotest_common.sh@940 -- # kill -0 136734 00:25:48.107 00:45:21 -- common/autotest_common.sh@941 -- # uname 00:25:48.107 00:45:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:48.107 00:45:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 136734 00:25:48.107 00:45:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:48.107 00:45:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:48.107 00:45:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 136734' 00:25:48.107 killing process with pid 136734 00:25:48.107 00:45:21 -- common/autotest_common.sh@955 -- # kill 136734 00:25:48.107 Received shutdown signal, test time was about 60.000000 seconds 00:25:48.107 00:25:48.107 Latency(us) 00:25:48.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.107 =================================================================================================================== 00:25:48.107 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:48.107 00:45:21 -- common/autotest_common.sh@960 -- # wait 136734 00:25:48.107 [2024-04-27 00:45:21.658638] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:48.107 [2024-04-27 00:45:21.658926] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:48.107 [2024-04-27 00:45:21.659190] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:48.107 [2024-04-27 00:45:21.659315] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:25:48.366 [2024-04-27 00:45:21.951079] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:49.742 00:25:49.742 real 0m25.629s 00:25:49.742 user 0m40.147s 00:25:49.742 sys 0m3.116s 00:25:49.742 00:45:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:49.742 00:45:23 -- common/autotest_common.sh@10 -- # set +x 00:25:49.742 ************************************ 00:25:49.742 END TEST raid5f_rebuild_test_sb 00:25:49.742 ************************************ 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:25:49.742 00:45:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:25:49.742 00:45:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:49.742 00:45:23 -- common/autotest_common.sh@10 -- # set +x 00:25:49.742 ************************************ 00:25:49.742 START TEST raid5f_state_function_test 00:25:49.742 ************************************ 00:25:49.742 00:45:23 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 4 false 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:25:49.742 00:45:23 -- 
bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=137379 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137379' 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:49.742 Process raid pid: 137379 00:25:49.742 00:45:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137379 /var/tmp/spdk-raid.sock 00:25:49.742 00:45:23 -- common/autotest_common.sh@817 -- # '[' -z 137379 ']' 00:25:49.742 00:45:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:49.742 00:45:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:49.742 00:45:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:49.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:49.742 00:45:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:49.742 00:45:23 -- common/autotest_common.sh@10 -- # set +x 00:25:49.742 [2024-04-27 00:45:23.231678] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:25:49.742 [2024-04-27 00:45:23.232158] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.000 [2024-04-27 00:45:23.402071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.352 [2024-04-27 00:45:23.599536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.352 [2024-04-27 00:45:23.789376] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:50.610 00:45:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:50.610 00:45:24 -- common/autotest_common.sh@850 -- # return 0 00:25:50.610 00:45:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:50.868 [2024-04-27 00:45:24.409678] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:50.868 [2024-04-27 00:45:24.409946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:50.868 [2024-04-27 00:45:24.410073] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:50.868 [2024-04-27 00:45:24.410138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:50.868 [2024-04-27 00:45:24.410279] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:50.868 [2024-04-27 00:45:24.410503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:50.868 [2024-04-27 00:45:24.410620] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:50.868 [2024-04-27 00:45:24.410687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=configuring 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.868 00:45:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.126 00:45:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:51.126 "name": "Existed_Raid", 00:25:51.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.126 "strip_size_kb": 64, 00:25:51.126 "state": "configuring", 00:25:51.126 "raid_level": "raid5f", 00:25:51.126 "superblock": false, 00:25:51.126 "num_base_bdevs": 4, 00:25:51.126 "num_base_bdevs_discovered": 0, 00:25:51.126 "num_base_bdevs_operational": 4, 00:25:51.126 "base_bdevs_list": [ 00:25:51.126 { 00:25:51.126 "name": "BaseBdev1", 00:25:51.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.126 "is_configured": false, 00:25:51.126 "data_offset": 0, 00:25:51.126 "data_size": 0 00:25:51.126 }, 00:25:51.126 { 00:25:51.126 "name": "BaseBdev2", 00:25:51.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.126 "is_configured": false, 00:25:51.126 "data_offset": 0, 00:25:51.126 "data_size": 0 00:25:51.126 }, 00:25:51.126 { 00:25:51.126 "name": "BaseBdev3", 00:25:51.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.126 "is_configured": false, 00:25:51.126 "data_offset": 0, 00:25:51.126 "data_size": 0 00:25:51.126 }, 00:25:51.126 { 00:25:51.126 "name": "BaseBdev4", 00:25:51.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.126 "is_configured": false, 00:25:51.126 "data_offset": 0, 00:25:51.126 "data_size": 0 00:25:51.126 } 00:25:51.126 ] 00:25:51.126 }' 00:25:51.126 00:45:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:51.126 00:45:24 -- common/autotest_common.sh@10 -- # set +x 00:25:51.692 00:45:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:51.952 [2024-04-27 00:45:25.441870] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:51.952 [2024-04-27 00:45:25.442186] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:25:51.952 00:45:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:52.210 [2024-04-27 00:45:25.645940] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:52.210 [2024-04-27 00:45:25.646301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:52.210 [2024-04-27 00:45:25.646454] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:52.210 [2024-04-27 00:45:25.646544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:52.210 [2024-04-27 00:45:25.646869] 
bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:52.210 [2024-04-27 00:45:25.646966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:52.210 [2024-04-27 00:45:25.647211] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:52.210 [2024-04-27 00:45:25.647287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:52.210 00:45:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:52.468 [2024-04-27 00:45:25.872998] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:52.468 BaseBdev1 00:25:52.468 00:45:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:25:52.468 00:45:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:25:52.468 00:45:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:52.468 00:45:25 -- common/autotest_common.sh@887 -- # local i 00:25:52.468 00:45:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:52.468 00:45:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:52.468 00:45:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:52.726 00:45:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:52.985 [ 00:25:52.985 { 00:25:52.985 "name": "BaseBdev1", 00:25:52.985 "aliases": [ 00:25:52.985 "5742888c-9610-4633-a07e-91d91f8343e8" 00:25:52.985 ], 00:25:52.985 "product_name": "Malloc disk", 00:25:52.985 "block_size": 512, 00:25:52.985 "num_blocks": 65536, 00:25:52.985 "uuid": "5742888c-9610-4633-a07e-91d91f8343e8", 00:25:52.985 "assigned_rate_limits": { 00:25:52.985 "rw_ios_per_sec": 0, 00:25:52.985 "rw_mbytes_per_sec": 0, 00:25:52.985 "r_mbytes_per_sec": 0, 00:25:52.985 "w_mbytes_per_sec": 0 00:25:52.985 }, 00:25:52.985 "claimed": true, 00:25:52.985 "claim_type": "exclusive_write", 00:25:52.985 "zoned": false, 00:25:52.985 "supported_io_types": { 00:25:52.985 "read": true, 00:25:52.985 "write": true, 00:25:52.985 "unmap": true, 00:25:52.985 "write_zeroes": true, 00:25:52.985 "flush": true, 00:25:52.985 "reset": true, 00:25:52.985 "compare": false, 00:25:52.985 "compare_and_write": false, 00:25:52.985 "abort": true, 00:25:52.985 "nvme_admin": false, 00:25:52.985 "nvme_io": false 00:25:52.985 }, 00:25:52.985 "memory_domains": [ 00:25:52.985 { 00:25:52.985 "dma_device_id": "system", 00:25:52.985 "dma_device_type": 1 00:25:52.985 }, 00:25:52.985 { 00:25:52.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:52.985 "dma_device_type": 2 00:25:52.985 } 00:25:52.985 ], 00:25:52.985 "driver_specific": {} 00:25:52.985 } 00:25:52.985 ] 00:25:52.985 00:45:26 -- common/autotest_common.sh@893 -- # return 0 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:52.985 00:45:26 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.985 00:45:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.243 00:45:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.243 "name": "Existed_Raid", 00:25:53.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.243 "strip_size_kb": 64, 00:25:53.243 "state": "configuring", 00:25:53.243 "raid_level": "raid5f", 00:25:53.243 "superblock": false, 00:25:53.243 "num_base_bdevs": 4, 00:25:53.243 "num_base_bdevs_discovered": 1, 00:25:53.243 "num_base_bdevs_operational": 4, 00:25:53.243 "base_bdevs_list": [ 00:25:53.243 { 00:25:53.243 "name": "BaseBdev1", 00:25:53.243 "uuid": "5742888c-9610-4633-a07e-91d91f8343e8", 00:25:53.243 "is_configured": true, 00:25:53.243 "data_offset": 0, 00:25:53.243 "data_size": 65536 00:25:53.243 }, 00:25:53.244 { 00:25:53.244 "name": "BaseBdev2", 00:25:53.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.244 "is_configured": false, 00:25:53.244 "data_offset": 0, 00:25:53.244 "data_size": 0 00:25:53.244 }, 00:25:53.244 { 00:25:53.244 "name": "BaseBdev3", 00:25:53.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.244 "is_configured": false, 00:25:53.244 "data_offset": 0, 00:25:53.244 "data_size": 0 00:25:53.244 }, 00:25:53.244 { 00:25:53.244 "name": "BaseBdev4", 00:25:53.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.244 "is_configured": false, 00:25:53.244 "data_offset": 0, 00:25:53.244 "data_size": 0 00:25:53.244 } 00:25:53.244 ] 00:25:53.244 }' 00:25:53.244 00:45:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.244 00:45:26 -- common/autotest_common.sh@10 -- # set +x 00:25:53.810 00:45:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:53.810 [2024-04-27 00:45:27.357399] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:53.810 [2024-04-27 00:45:27.357732] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:25:53.810 00:45:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:25:53.811 00:45:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:54.069 [2024-04-27 00:45:27.621458] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:54.069 [2024-04-27 00:45:27.623873] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:54.069 [2024-04-27 00:45:27.624111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:54.069 [2024-04-27 00:45:27.624263] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:54.069 [2024-04-27 00:45:27.624342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:54.069 [2024-04-27 00:45:27.624575] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:54.069 
[2024-04-27 00:45:27.624661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:54.069 00:45:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:25:54.069 00:45:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:54.069 00:45:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:54.069 00:45:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:54.069 00:45:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:54.069 00:45:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:54.069 00:45:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:54.069 00:45:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:54.070 00:45:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:54.070 00:45:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:54.070 00:45:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:54.070 00:45:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:54.070 00:45:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.070 00:45:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:54.637 00:45:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:54.637 "name": "Existed_Raid", 00:25:54.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.637 "strip_size_kb": 64, 00:25:54.637 "state": "configuring", 00:25:54.637 "raid_level": "raid5f", 00:25:54.637 "superblock": false, 00:25:54.637 "num_base_bdevs": 4, 00:25:54.637 "num_base_bdevs_discovered": 1, 00:25:54.637 "num_base_bdevs_operational": 4, 00:25:54.637 "base_bdevs_list": [ 00:25:54.637 { 00:25:54.637 "name": "BaseBdev1", 00:25:54.637 "uuid": "5742888c-9610-4633-a07e-91d91f8343e8", 00:25:54.637 "is_configured": true, 00:25:54.637 "data_offset": 0, 00:25:54.637 "data_size": 65536 00:25:54.637 }, 00:25:54.637 { 00:25:54.637 "name": "BaseBdev2", 00:25:54.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.637 "is_configured": false, 00:25:54.637 "data_offset": 0, 00:25:54.637 "data_size": 0 00:25:54.637 }, 00:25:54.637 { 00:25:54.637 "name": "BaseBdev3", 00:25:54.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.637 "is_configured": false, 00:25:54.637 "data_offset": 0, 00:25:54.637 "data_size": 0 00:25:54.637 }, 00:25:54.637 { 00:25:54.637 "name": "BaseBdev4", 00:25:54.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.637 "is_configured": false, 00:25:54.637 "data_offset": 0, 00:25:54.637 "data_size": 0 00:25:54.637 } 00:25:54.637 ] 00:25:54.637 }' 00:25:54.637 00:45:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:54.637 00:45:27 -- common/autotest_common.sh@10 -- # set +x 00:25:55.205 00:45:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:55.464 [2024-04-27 00:45:28.814282] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:55.464 BaseBdev2 00:25:55.464 00:45:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:25:55.464 00:45:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:25:55.464 00:45:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:55.464 00:45:28 -- common/autotest_common.sh@887 -- # local i 00:25:55.464 00:45:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:55.464 00:45:28 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:55.464 00:45:28 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:55.723 00:45:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:55.723 [ 00:25:55.723 { 00:25:55.723 "name": "BaseBdev2", 00:25:55.723 "aliases": [ 00:25:55.723 "6fa9f841-bb02-45ce-ab3e-955aae8be998" 00:25:55.723 ], 00:25:55.723 "product_name": "Malloc disk", 00:25:55.723 "block_size": 512, 00:25:55.723 "num_blocks": 65536, 00:25:55.723 "uuid": "6fa9f841-bb02-45ce-ab3e-955aae8be998", 00:25:55.723 "assigned_rate_limits": { 00:25:55.723 "rw_ios_per_sec": 0, 00:25:55.723 "rw_mbytes_per_sec": 0, 00:25:55.723 "r_mbytes_per_sec": 0, 00:25:55.723 "w_mbytes_per_sec": 0 00:25:55.723 }, 00:25:55.723 "claimed": true, 00:25:55.723 "claim_type": "exclusive_write", 00:25:55.723 "zoned": false, 00:25:55.723 "supported_io_types": { 00:25:55.723 "read": true, 00:25:55.723 "write": true, 00:25:55.723 "unmap": true, 00:25:55.723 "write_zeroes": true, 00:25:55.723 "flush": true, 00:25:55.723 "reset": true, 00:25:55.723 "compare": false, 00:25:55.723 "compare_and_write": false, 00:25:55.723 "abort": true, 00:25:55.723 "nvme_admin": false, 00:25:55.723 "nvme_io": false 00:25:55.723 }, 00:25:55.723 "memory_domains": [ 00:25:55.723 { 00:25:55.723 "dma_device_id": "system", 00:25:55.723 "dma_device_type": 1 00:25:55.723 }, 00:25:55.723 { 00:25:55.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.723 "dma_device_type": 2 00:25:55.723 } 00:25:55.723 ], 00:25:55.723 "driver_specific": {} 00:25:55.723 } 00:25:55.723 ] 00:25:55.723 00:45:29 -- common/autotest_common.sh@893 -- # return 0 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.723 00:45:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.982 00:45:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:55.982 "name": "Existed_Raid", 00:25:55.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.982 "strip_size_kb": 64, 00:25:55.982 "state": "configuring", 00:25:55.982 "raid_level": "raid5f", 00:25:55.982 "superblock": false, 00:25:55.982 "num_base_bdevs": 4, 00:25:55.982 "num_base_bdevs_discovered": 2, 00:25:55.982 "num_base_bdevs_operational": 4, 00:25:55.982 "base_bdevs_list": [ 00:25:55.982 { 00:25:55.982 "name": "BaseBdev1", 00:25:55.982 "uuid": 
"5742888c-9610-4633-a07e-91d91f8343e8", 00:25:55.982 "is_configured": true, 00:25:55.982 "data_offset": 0, 00:25:55.982 "data_size": 65536 00:25:55.982 }, 00:25:55.982 { 00:25:55.982 "name": "BaseBdev2", 00:25:55.982 "uuid": "6fa9f841-bb02-45ce-ab3e-955aae8be998", 00:25:55.982 "is_configured": true, 00:25:55.982 "data_offset": 0, 00:25:55.982 "data_size": 65536 00:25:55.982 }, 00:25:55.982 { 00:25:55.982 "name": "BaseBdev3", 00:25:55.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.982 "is_configured": false, 00:25:55.982 "data_offset": 0, 00:25:55.982 "data_size": 0 00:25:55.982 }, 00:25:55.982 { 00:25:55.982 "name": "BaseBdev4", 00:25:55.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.982 "is_configured": false, 00:25:55.982 "data_offset": 0, 00:25:55.982 "data_size": 0 00:25:55.982 } 00:25:55.982 ] 00:25:55.982 }' 00:25:55.982 00:45:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:55.982 00:45:29 -- common/autotest_common.sh@10 -- # set +x 00:25:56.549 00:45:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:56.808 [2024-04-27 00:45:30.360963] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:56.808 BaseBdev3 00:25:56.808 00:45:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:25:56.808 00:45:30 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:25:56.808 00:45:30 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:56.808 00:45:30 -- common/autotest_common.sh@887 -- # local i 00:25:56.808 00:45:30 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:56.808 00:45:30 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:56.808 00:45:30 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:57.067 00:45:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:57.327 [ 00:25:57.327 { 00:25:57.327 "name": "BaseBdev3", 00:25:57.327 "aliases": [ 00:25:57.327 "dc77d08b-0eeb-44c0-9174-8215318a5351" 00:25:57.327 ], 00:25:57.327 "product_name": "Malloc disk", 00:25:57.327 "block_size": 512, 00:25:57.327 "num_blocks": 65536, 00:25:57.327 "uuid": "dc77d08b-0eeb-44c0-9174-8215318a5351", 00:25:57.327 "assigned_rate_limits": { 00:25:57.327 "rw_ios_per_sec": 0, 00:25:57.327 "rw_mbytes_per_sec": 0, 00:25:57.327 "r_mbytes_per_sec": 0, 00:25:57.327 "w_mbytes_per_sec": 0 00:25:57.327 }, 00:25:57.327 "claimed": true, 00:25:57.327 "claim_type": "exclusive_write", 00:25:57.327 "zoned": false, 00:25:57.327 "supported_io_types": { 00:25:57.327 "read": true, 00:25:57.327 "write": true, 00:25:57.327 "unmap": true, 00:25:57.327 "write_zeroes": true, 00:25:57.327 "flush": true, 00:25:57.327 "reset": true, 00:25:57.327 "compare": false, 00:25:57.327 "compare_and_write": false, 00:25:57.327 "abort": true, 00:25:57.327 "nvme_admin": false, 00:25:57.327 "nvme_io": false 00:25:57.327 }, 00:25:57.327 "memory_domains": [ 00:25:57.327 { 00:25:57.327 "dma_device_id": "system", 00:25:57.327 "dma_device_type": 1 00:25:57.327 }, 00:25:57.327 { 00:25:57.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.327 "dma_device_type": 2 00:25:57.327 } 00:25:57.327 ], 00:25:57.327 "driver_specific": {} 00:25:57.327 } 00:25:57.327 ] 00:25:57.327 00:45:30 -- common/autotest_common.sh@893 -- # return 0 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@254 -- 
# (( i++ )) 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.327 00:45:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.586 00:45:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:57.586 "name": "Existed_Raid", 00:25:57.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.586 "strip_size_kb": 64, 00:25:57.586 "state": "configuring", 00:25:57.586 "raid_level": "raid5f", 00:25:57.586 "superblock": false, 00:25:57.586 "num_base_bdevs": 4, 00:25:57.586 "num_base_bdevs_discovered": 3, 00:25:57.586 "num_base_bdevs_operational": 4, 00:25:57.586 "base_bdevs_list": [ 00:25:57.586 { 00:25:57.586 "name": "BaseBdev1", 00:25:57.586 "uuid": "5742888c-9610-4633-a07e-91d91f8343e8", 00:25:57.586 "is_configured": true, 00:25:57.586 "data_offset": 0, 00:25:57.586 "data_size": 65536 00:25:57.586 }, 00:25:57.586 { 00:25:57.586 "name": "BaseBdev2", 00:25:57.586 "uuid": "6fa9f841-bb02-45ce-ab3e-955aae8be998", 00:25:57.586 "is_configured": true, 00:25:57.586 "data_offset": 0, 00:25:57.586 "data_size": 65536 00:25:57.586 }, 00:25:57.586 { 00:25:57.586 "name": "BaseBdev3", 00:25:57.586 "uuid": "dc77d08b-0eeb-44c0-9174-8215318a5351", 00:25:57.586 "is_configured": true, 00:25:57.586 "data_offset": 0, 00:25:57.586 "data_size": 65536 00:25:57.586 }, 00:25:57.586 { 00:25:57.586 "name": "BaseBdev4", 00:25:57.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.586 "is_configured": false, 00:25:57.586 "data_offset": 0, 00:25:57.586 "data_size": 0 00:25:57.586 } 00:25:57.586 ] 00:25:57.586 }' 00:25:57.586 00:45:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:57.586 00:45:31 -- common/autotest_common.sh@10 -- # set +x 00:25:58.153 00:45:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:58.413 [2024-04-27 00:45:31.891477] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:58.413 [2024-04-27 00:45:31.891877] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:25:58.413 [2024-04-27 00:45:31.892008] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:58.413 [2024-04-27 00:45:31.892204] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:58.413 [2024-04-27 00:45:31.899141] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:25:58.413 [2024-04-27 00:45:31.899300] bdev_raid.c:1732:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:25:58.413 [2024-04-27 00:45:31.899748] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.413 BaseBdev4 00:25:58.413 00:45:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:25:58.413 00:45:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:25:58.413 00:45:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:25:58.413 00:45:31 -- common/autotest_common.sh@887 -- # local i 00:25:58.413 00:45:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:25:58.413 00:45:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:25:58.413 00:45:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:58.671 00:45:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:58.930 [ 00:25:58.930 { 00:25:58.930 "name": "BaseBdev4", 00:25:58.930 "aliases": [ 00:25:58.930 "66e21748-a254-4a2e-92b4-fd05febb7076" 00:25:58.930 ], 00:25:58.930 "product_name": "Malloc disk", 00:25:58.930 "block_size": 512, 00:25:58.930 "num_blocks": 65536, 00:25:58.930 "uuid": "66e21748-a254-4a2e-92b4-fd05febb7076", 00:25:58.930 "assigned_rate_limits": { 00:25:58.930 "rw_ios_per_sec": 0, 00:25:58.930 "rw_mbytes_per_sec": 0, 00:25:58.930 "r_mbytes_per_sec": 0, 00:25:58.930 "w_mbytes_per_sec": 0 00:25:58.930 }, 00:25:58.930 "claimed": true, 00:25:58.930 "claim_type": "exclusive_write", 00:25:58.930 "zoned": false, 00:25:58.930 "supported_io_types": { 00:25:58.930 "read": true, 00:25:58.930 "write": true, 00:25:58.930 "unmap": true, 00:25:58.930 "write_zeroes": true, 00:25:58.930 "flush": true, 00:25:58.930 "reset": true, 00:25:58.930 "compare": false, 00:25:58.930 "compare_and_write": false, 00:25:58.930 "abort": true, 00:25:58.930 "nvme_admin": false, 00:25:58.930 "nvme_io": false 00:25:58.930 }, 00:25:58.930 "memory_domains": [ 00:25:58.930 { 00:25:58.930 "dma_device_id": "system", 00:25:58.930 "dma_device_type": 1 00:25:58.930 }, 00:25:58.930 { 00:25:58.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:58.930 "dma_device_type": 2 00:25:58.930 } 00:25:58.930 ], 00:25:58.930 "driver_specific": {} 00:25:58.930 } 00:25:58.930 ] 00:25:58.930 00:45:32 -- common/autotest_common.sh@893 -- # return 0 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.930 00:45:32 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:59.189 00:45:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:59.189 "name": "Existed_Raid", 00:25:59.189 "uuid": "46b2db53-22a1-4af0-9941-a515263801cd", 00:25:59.189 "strip_size_kb": 64, 00:25:59.189 "state": "online", 00:25:59.189 "raid_level": "raid5f", 00:25:59.189 "superblock": false, 00:25:59.189 "num_base_bdevs": 4, 00:25:59.189 "num_base_bdevs_discovered": 4, 00:25:59.189 "num_base_bdevs_operational": 4, 00:25:59.189 "base_bdevs_list": [ 00:25:59.189 { 00:25:59.189 "name": "BaseBdev1", 00:25:59.189 "uuid": "5742888c-9610-4633-a07e-91d91f8343e8", 00:25:59.189 "is_configured": true, 00:25:59.189 "data_offset": 0, 00:25:59.189 "data_size": 65536 00:25:59.189 }, 00:25:59.189 { 00:25:59.189 "name": "BaseBdev2", 00:25:59.189 "uuid": "6fa9f841-bb02-45ce-ab3e-955aae8be998", 00:25:59.189 "is_configured": true, 00:25:59.189 "data_offset": 0, 00:25:59.189 "data_size": 65536 00:25:59.189 }, 00:25:59.189 { 00:25:59.189 "name": "BaseBdev3", 00:25:59.189 "uuid": "dc77d08b-0eeb-44c0-9174-8215318a5351", 00:25:59.189 "is_configured": true, 00:25:59.189 "data_offset": 0, 00:25:59.189 "data_size": 65536 00:25:59.189 }, 00:25:59.189 { 00:25:59.189 "name": "BaseBdev4", 00:25:59.189 "uuid": "66e21748-a254-4a2e-92b4-fd05febb7076", 00:25:59.189 "is_configured": true, 00:25:59.189 "data_offset": 0, 00:25:59.189 "data_size": 65536 00:25:59.189 } 00:25:59.189 ] 00:25:59.189 }' 00:25:59.189 00:45:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:59.189 00:45:32 -- common/autotest_common.sh@10 -- # set +x 00:25:59.756 00:45:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:00.014 [2024-04-27 00:45:33.479086] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.014 00:45:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.306 00:45:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:00.306 "name": "Existed_Raid", 00:26:00.306 "uuid": "46b2db53-22a1-4af0-9941-a515263801cd", 00:26:00.306 "strip_size_kb": 64, 00:26:00.306 "state": "online", 00:26:00.306 "raid_level": "raid5f", 00:26:00.306 "superblock": false, 00:26:00.306 
"num_base_bdevs": 4, 00:26:00.306 "num_base_bdevs_discovered": 3, 00:26:00.306 "num_base_bdevs_operational": 3, 00:26:00.306 "base_bdevs_list": [ 00:26:00.306 { 00:26:00.306 "name": null, 00:26:00.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.306 "is_configured": false, 00:26:00.306 "data_offset": 0, 00:26:00.306 "data_size": 65536 00:26:00.306 }, 00:26:00.306 { 00:26:00.306 "name": "BaseBdev2", 00:26:00.306 "uuid": "6fa9f841-bb02-45ce-ab3e-955aae8be998", 00:26:00.306 "is_configured": true, 00:26:00.306 "data_offset": 0, 00:26:00.306 "data_size": 65536 00:26:00.306 }, 00:26:00.306 { 00:26:00.306 "name": "BaseBdev3", 00:26:00.306 "uuid": "dc77d08b-0eeb-44c0-9174-8215318a5351", 00:26:00.306 "is_configured": true, 00:26:00.306 "data_offset": 0, 00:26:00.306 "data_size": 65536 00:26:00.306 }, 00:26:00.306 { 00:26:00.306 "name": "BaseBdev4", 00:26:00.306 "uuid": "66e21748-a254-4a2e-92b4-fd05febb7076", 00:26:00.306 "is_configured": true, 00:26:00.306 "data_offset": 0, 00:26:00.306 "data_size": 65536 00:26:00.306 } 00:26:00.306 ] 00:26:00.306 }' 00:26:00.306 00:45:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:00.306 00:45:33 -- common/autotest_common.sh@10 -- # set +x 00:26:01.242 00:45:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:01.242 00:45:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:01.242 00:45:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.242 00:45:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:01.242 00:45:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:01.242 00:45:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:01.242 00:45:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:01.500 [2024-04-27 00:45:34.946966] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:01.500 [2024-04-27 00:45:34.947371] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:01.500 [2024-04-27 00:45:35.014954] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:01.500 00:45:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:01.500 00:45:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:01.500 00:45:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.500 00:45:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:01.759 00:45:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:01.759 00:45:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:01.759 00:45:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:02.018 [2024-04-27 00:45:35.507246] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:02.018 00:45:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:02.018 00:45:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:02.018 00:45:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.018 00:45:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:02.277 00:45:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:02.277 00:45:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:26:02.277 00:45:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:02.536 [2024-04-27 00:45:36.064147] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:02.536 [2024-04-27 00:45:36.064553] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:26:02.795 00:45:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:02.795 00:45:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:02.795 00:45:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.795 00:45:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:03.055 00:45:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:03.055 00:45:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:03.055 00:45:36 -- bdev/bdev_raid.sh@287 -- # killprocess 137379 00:26:03.055 00:45:36 -- common/autotest_common.sh@936 -- # '[' -z 137379 ']' 00:26:03.055 00:45:36 -- common/autotest_common.sh@940 -- # kill -0 137379 00:26:03.055 00:45:36 -- common/autotest_common.sh@941 -- # uname 00:26:03.055 00:45:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:03.055 00:45:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137379 00:26:03.055 killing process with pid 137379 00:26:03.055 00:45:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:03.055 00:45:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:03.055 00:45:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137379' 00:26:03.055 00:45:36 -- common/autotest_common.sh@955 -- # kill 137379 00:26:03.055 00:45:36 -- common/autotest_common.sh@960 -- # wait 137379 00:26:03.055 [2024-04-27 00:45:36.436751] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:03.055 [2024-04-27 00:45:36.436895] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:03.993 ************************************ 00:26:03.993 END TEST raid5f_state_function_test 00:26:03.993 ************************************ 00:26:03.993 00:45:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:03.993 00:26:03.993 real 0m14.349s 00:26:03.993 user 0m25.464s 00:26:03.993 sys 0m1.769s 00:26:03.993 00:45:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:03.993 00:45:37 -- common/autotest_common.sh@10 -- # set +x 00:26:03.993 00:45:37 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:26:03.993 00:45:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:26:03.993 00:45:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:03.993 00:45:37 -- common/autotest_common.sh@10 -- # set +x 00:26:04.252 ************************************ 00:26:04.252 START TEST raid5f_state_function_test_sb 00:26:04.252 ************************************ 00:26:04.252 00:45:37 -- common/autotest_common.sh@1111 -- # raid_state_function_test raid5f 4 true 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:04.252 00:45:37 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:26:04.252 00:45:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=137824 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137824' 00:26:04.253 Process raid pid: 137824 00:26:04.253 00:45:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137824 /var/tmp/spdk-raid.sock 00:26:04.253 00:45:37 -- common/autotest_common.sh@817 -- # '[' -z 137824 ']' 00:26:04.253 00:45:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:04.253 00:45:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:04.253 00:45:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:04.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:04.253 00:45:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:04.253 00:45:37 -- common/autotest_common.sh@10 -- # set +x 00:26:04.253 [2024-04-27 00:45:37.667528] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
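From here the suite reruns the state-function test with superblock=true. The scaffolding is identical to the run that just finished; the functional difference is that superblock_create_arg becomes -s, so every bdev_raid_create below carries it. A minimal sketch of that create path, with the binary path, socket, and flags copied verbatim from the surrounding records (assumes the usual SPDK test environment):

    # Start the app under test, then build the superblock variant of the array.
    # bdev_raid_create accepts members that don't exist yet ("doesn't exist now"
    # in the log); the raid assembles as the malloc bdevs are created later.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

With -s each base bdev reserves room for the on-disk superblock, which is why the JSON below reports data_offset 2048 and data_size 63488 where the non-superblock run showed 0 and 65536.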
00:26:04.253 [2024-04-27 00:45:37.667911] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.253 [2024-04-27 00:45:37.829045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.511 [2024-04-27 00:45:38.039954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.770 [2024-04-27 00:45:38.244620] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:05.029 00:45:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:05.029 00:45:38 -- common/autotest_common.sh@850 -- # return 0 00:26:05.029 00:45:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:05.289 [2024-04-27 00:45:38.739549] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:05.289 [2024-04-27 00:45:38.739953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:05.289 [2024-04-27 00:45:38.740080] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:05.289 [2024-04-27 00:45:38.740241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:05.289 [2024-04-27 00:45:38.740358] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:05.289 [2024-04-27 00:45:38.740481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:05.289 [2024-04-27 00:45:38.740752] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:05.289 [2024-04-27 00:45:38.740863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.289 00:45:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.548 00:45:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:05.548 "name": "Existed_Raid", 00:26:05.548 "uuid": "e32038c7-81f4-4f36-83d2-2b741c00be18", 00:26:05.548 "strip_size_kb": 64, 00:26:05.548 "state": "configuring", 00:26:05.548 "raid_level": "raid5f", 00:26:05.548 "superblock": true, 00:26:05.548 "num_base_bdevs": 4, 00:26:05.548 "num_base_bdevs_discovered": 0, 00:26:05.548 "num_base_bdevs_operational": 4, 00:26:05.548 "base_bdevs_list": [ 00:26:05.548 { 
00:26:05.548 "name": "BaseBdev1", 00:26:05.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.548 "is_configured": false, 00:26:05.548 "data_offset": 0, 00:26:05.548 "data_size": 0 00:26:05.548 }, 00:26:05.548 { 00:26:05.548 "name": "BaseBdev2", 00:26:05.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.548 "is_configured": false, 00:26:05.548 "data_offset": 0, 00:26:05.548 "data_size": 0 00:26:05.548 }, 00:26:05.548 { 00:26:05.548 "name": "BaseBdev3", 00:26:05.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.548 "is_configured": false, 00:26:05.548 "data_offset": 0, 00:26:05.548 "data_size": 0 00:26:05.548 }, 00:26:05.548 { 00:26:05.548 "name": "BaseBdev4", 00:26:05.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.548 "is_configured": false, 00:26:05.548 "data_offset": 0, 00:26:05.549 "data_size": 0 00:26:05.549 } 00:26:05.549 ] 00:26:05.549 }' 00:26:05.549 00:45:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:05.549 00:45:38 -- common/autotest_common.sh@10 -- # set +x 00:26:06.117 00:45:39 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:06.377 [2024-04-27 00:45:39.875652] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:06.377 [2024-04-27 00:45:39.875984] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name Existed_Raid, state configuring 00:26:06.377 00:45:39 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:06.636 [2024-04-27 00:45:40.087680] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:06.636 [2024-04-27 00:45:40.087946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:06.636 [2024-04-27 00:45:40.088095] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:06.636 [2024-04-27 00:45:40.088177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:06.636 [2024-04-27 00:45:40.088461] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:06.636 [2024-04-27 00:45:40.088585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:06.636 [2024-04-27 00:45:40.088783] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:06.636 [2024-04-27 00:45:40.088882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:06.636 00:45:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:06.895 [2024-04-27 00:45:40.403189] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:06.895 BaseBdev1 00:26:06.895 00:45:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:26:06.895 00:45:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:06.895 00:45:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:06.895 00:45:40 -- common/autotest_common.sh@887 -- # local i 00:26:06.895 00:45:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:06.895 00:45:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:06.896 00:45:40 -- common/autotest_common.sh@890 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:07.155 00:45:40 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:07.415 [ 00:26:07.415 { 00:26:07.415 "name": "BaseBdev1", 00:26:07.415 "aliases": [ 00:26:07.415 "67c4cfc7-cd56-4b2b-b59c-d4a4e6ce01a3" 00:26:07.415 ], 00:26:07.415 "product_name": "Malloc disk", 00:26:07.415 "block_size": 512, 00:26:07.415 "num_blocks": 65536, 00:26:07.415 "uuid": "67c4cfc7-cd56-4b2b-b59c-d4a4e6ce01a3", 00:26:07.415 "assigned_rate_limits": { 00:26:07.415 "rw_ios_per_sec": 0, 00:26:07.415 "rw_mbytes_per_sec": 0, 00:26:07.415 "r_mbytes_per_sec": 0, 00:26:07.415 "w_mbytes_per_sec": 0 00:26:07.415 }, 00:26:07.415 "claimed": true, 00:26:07.415 "claim_type": "exclusive_write", 00:26:07.415 "zoned": false, 00:26:07.415 "supported_io_types": { 00:26:07.415 "read": true, 00:26:07.415 "write": true, 00:26:07.415 "unmap": true, 00:26:07.415 "write_zeroes": true, 00:26:07.415 "flush": true, 00:26:07.415 "reset": true, 00:26:07.415 "compare": false, 00:26:07.415 "compare_and_write": false, 00:26:07.415 "abort": true, 00:26:07.415 "nvme_admin": false, 00:26:07.415 "nvme_io": false 00:26:07.415 }, 00:26:07.415 "memory_domains": [ 00:26:07.415 { 00:26:07.415 "dma_device_id": "system", 00:26:07.415 "dma_device_type": 1 00:26:07.415 }, 00:26:07.415 { 00:26:07.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.415 "dma_device_type": 2 00:26:07.415 } 00:26:07.415 ], 00:26:07.415 "driver_specific": {} 00:26:07.415 } 00:26:07.415 ] 00:26:07.415 00:45:40 -- common/autotest_common.sh@893 -- # return 0 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.415 00:45:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.675 00:45:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:07.675 "name": "Existed_Raid", 00:26:07.675 "uuid": "39631a38-2673-4a6a-a865-5eb1896f1423", 00:26:07.675 "strip_size_kb": 64, 00:26:07.675 "state": "configuring", 00:26:07.675 "raid_level": "raid5f", 00:26:07.675 "superblock": true, 00:26:07.675 "num_base_bdevs": 4, 00:26:07.675 "num_base_bdevs_discovered": 1, 00:26:07.675 "num_base_bdevs_operational": 4, 00:26:07.675 "base_bdevs_list": [ 00:26:07.675 { 00:26:07.675 "name": "BaseBdev1", 00:26:07.675 "uuid": "67c4cfc7-cd56-4b2b-b59c-d4a4e6ce01a3", 00:26:07.675 "is_configured": true, 00:26:07.675 "data_offset": 2048, 00:26:07.675 "data_size": 63488 00:26:07.675 }, 00:26:07.675 { 00:26:07.675 "name": "BaseBdev2", 00:26:07.675 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:07.675 "is_configured": false, 00:26:07.675 "data_offset": 0, 00:26:07.675 "data_size": 0 00:26:07.675 }, 00:26:07.675 { 00:26:07.675 "name": "BaseBdev3", 00:26:07.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.675 "is_configured": false, 00:26:07.675 "data_offset": 0, 00:26:07.675 "data_size": 0 00:26:07.675 }, 00:26:07.675 { 00:26:07.675 "name": "BaseBdev4", 00:26:07.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.675 "is_configured": false, 00:26:07.675 "data_offset": 0, 00:26:07.675 "data_size": 0 00:26:07.675 } 00:26:07.675 ] 00:26:07.675 }' 00:26:07.675 00:45:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:07.675 00:45:41 -- common/autotest_common.sh@10 -- # set +x 00:26:08.242 00:45:41 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:08.503 [2024-04-27 00:45:41.911572] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:08.503 [2024-04-27 00:45:41.911901] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name Existed_Raid, state configuring 00:26:08.503 00:45:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:26:08.503 00:45:41 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:08.772 00:45:42 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:09.030 BaseBdev1 00:26:09.030 00:45:42 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:26:09.030 00:45:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:26:09.030 00:45:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:09.030 00:45:42 -- common/autotest_common.sh@887 -- # local i 00:26:09.030 00:45:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:09.030 00:45:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:09.030 00:45:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:09.289 00:45:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:09.289 [ 00:26:09.289 { 00:26:09.289 "name": "BaseBdev1", 00:26:09.289 "aliases": [ 00:26:09.289 "91c3520c-ad9e-42fe-b3f1-a558bf5a5dc1" 00:26:09.289 ], 00:26:09.289 "product_name": "Malloc disk", 00:26:09.289 "block_size": 512, 00:26:09.289 "num_blocks": 65536, 00:26:09.289 "uuid": "91c3520c-ad9e-42fe-b3f1-a558bf5a5dc1", 00:26:09.289 "assigned_rate_limits": { 00:26:09.289 "rw_ios_per_sec": 0, 00:26:09.289 "rw_mbytes_per_sec": 0, 00:26:09.289 "r_mbytes_per_sec": 0, 00:26:09.289 "w_mbytes_per_sec": 0 00:26:09.289 }, 00:26:09.289 "claimed": false, 00:26:09.289 "zoned": false, 00:26:09.289 "supported_io_types": { 00:26:09.289 "read": true, 00:26:09.289 "write": true, 00:26:09.289 "unmap": true, 00:26:09.289 "write_zeroes": true, 00:26:09.289 "flush": true, 00:26:09.289 "reset": true, 00:26:09.289 "compare": false, 00:26:09.289 "compare_and_write": false, 00:26:09.289 "abort": true, 00:26:09.289 "nvme_admin": false, 00:26:09.289 "nvme_io": false 00:26:09.289 }, 00:26:09.289 "memory_domains": [ 00:26:09.289 { 00:26:09.289 "dma_device_id": "system", 00:26:09.289 "dma_device_type": 1 00:26:09.289 }, 00:26:09.289 { 00:26:09.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.290 "dma_device_type": 2 
00:26:09.290 } 00:26:09.290 ], 00:26:09.290 "driver_specific": {} 00:26:09.290 } 00:26:09.290 ] 00:26:09.549 00:45:42 -- common/autotest_common.sh@893 -- # return 0 00:26:09.549 00:45:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:09.549 [2024-04-27 00:45:43.085871] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:09.549 [2024-04-27 00:45:43.088297] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:09.549 [2024-04-27 00:45:43.088565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:09.549 [2024-04-27 00:45:43.088706] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:09.549 [2024-04-27 00:45:43.088786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:09.549 [2024-04-27 00:45:43.089045] bdev.c:8084:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:09.549 [2024-04-27 00:45:43.089117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.549 00:45:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.807 00:45:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:09.807 "name": "Existed_Raid", 00:26:09.807 "uuid": "919391ee-1300-4bde-9b7a-f084f9e6bd1b", 00:26:09.807 "strip_size_kb": 64, 00:26:09.807 "state": "configuring", 00:26:09.807 "raid_level": "raid5f", 00:26:09.807 "superblock": true, 00:26:09.807 "num_base_bdevs": 4, 00:26:09.807 "num_base_bdevs_discovered": 1, 00:26:09.807 "num_base_bdevs_operational": 4, 00:26:09.807 "base_bdevs_list": [ 00:26:09.807 { 00:26:09.807 "name": "BaseBdev1", 00:26:09.807 "uuid": "91c3520c-ad9e-42fe-b3f1-a558bf5a5dc1", 00:26:09.807 "is_configured": true, 00:26:09.807 "data_offset": 2048, 00:26:09.807 "data_size": 63488 00:26:09.807 }, 00:26:09.807 { 00:26:09.807 "name": "BaseBdev2", 00:26:09.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.807 "is_configured": false, 00:26:09.807 "data_offset": 0, 00:26:09.807 "data_size": 0 00:26:09.807 }, 00:26:09.807 { 00:26:09.807 "name": "BaseBdev3", 00:26:09.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.807 "is_configured": 
false, 00:26:09.807 "data_offset": 0, 00:26:09.807 "data_size": 0 00:26:09.807 }, 00:26:09.807 { 00:26:09.807 "name": "BaseBdev4", 00:26:09.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.807 "is_configured": false, 00:26:09.807 "data_offset": 0, 00:26:09.807 "data_size": 0 00:26:09.807 } 00:26:09.807 ] 00:26:09.807 }' 00:26:09.807 00:45:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:09.807 00:45:43 -- common/autotest_common.sh@10 -- # set +x 00:26:10.743 00:45:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:10.743 [2024-04-27 00:45:44.245058] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:10.743 BaseBdev2 00:26:10.743 00:45:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:26:10.743 00:45:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:26:10.743 00:45:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:10.743 00:45:44 -- common/autotest_common.sh@887 -- # local i 00:26:10.743 00:45:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:10.743 00:45:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:10.743 00:45:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:11.002 00:45:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:11.261 [ 00:26:11.261 { 00:26:11.261 "name": "BaseBdev2", 00:26:11.261 "aliases": [ 00:26:11.261 "30ef0db7-6375-4fa3-85db-201baeda92d0" 00:26:11.261 ], 00:26:11.261 "product_name": "Malloc disk", 00:26:11.261 "block_size": 512, 00:26:11.261 "num_blocks": 65536, 00:26:11.261 "uuid": "30ef0db7-6375-4fa3-85db-201baeda92d0", 00:26:11.261 "assigned_rate_limits": { 00:26:11.261 "rw_ios_per_sec": 0, 00:26:11.261 "rw_mbytes_per_sec": 0, 00:26:11.261 "r_mbytes_per_sec": 0, 00:26:11.261 "w_mbytes_per_sec": 0 00:26:11.261 }, 00:26:11.261 "claimed": true, 00:26:11.261 "claim_type": "exclusive_write", 00:26:11.261 "zoned": false, 00:26:11.261 "supported_io_types": { 00:26:11.261 "read": true, 00:26:11.261 "write": true, 00:26:11.261 "unmap": true, 00:26:11.261 "write_zeroes": true, 00:26:11.261 "flush": true, 00:26:11.261 "reset": true, 00:26:11.261 "compare": false, 00:26:11.261 "compare_and_write": false, 00:26:11.261 "abort": true, 00:26:11.261 "nvme_admin": false, 00:26:11.261 "nvme_io": false 00:26:11.261 }, 00:26:11.261 "memory_domains": [ 00:26:11.261 { 00:26:11.261 "dma_device_id": "system", 00:26:11.261 "dma_device_type": 1 00:26:11.261 }, 00:26:11.261 { 00:26:11.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.261 "dma_device_type": 2 00:26:11.261 } 00:26:11.261 ], 00:26:11.261 "driver_specific": {} 00:26:11.261 } 00:26:11.261 ] 00:26:11.261 00:45:44 -- common/autotest_common.sh@893 -- # return 0 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 
00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.261 00:45:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.520 00:45:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:11.520 "name": "Existed_Raid", 00:26:11.520 "uuid": "919391ee-1300-4bde-9b7a-f084f9e6bd1b", 00:26:11.520 "strip_size_kb": 64, 00:26:11.520 "state": "configuring", 00:26:11.520 "raid_level": "raid5f", 00:26:11.520 "superblock": true, 00:26:11.520 "num_base_bdevs": 4, 00:26:11.520 "num_base_bdevs_discovered": 2, 00:26:11.520 "num_base_bdevs_operational": 4, 00:26:11.520 "base_bdevs_list": [ 00:26:11.520 { 00:26:11.520 "name": "BaseBdev1", 00:26:11.520 "uuid": "91c3520c-ad9e-42fe-b3f1-a558bf5a5dc1", 00:26:11.520 "is_configured": true, 00:26:11.520 "data_offset": 2048, 00:26:11.520 "data_size": 63488 00:26:11.520 }, 00:26:11.520 { 00:26:11.520 "name": "BaseBdev2", 00:26:11.520 "uuid": "30ef0db7-6375-4fa3-85db-201baeda92d0", 00:26:11.520 "is_configured": true, 00:26:11.520 "data_offset": 2048, 00:26:11.520 "data_size": 63488 00:26:11.520 }, 00:26:11.520 { 00:26:11.520 "name": "BaseBdev3", 00:26:11.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.520 "is_configured": false, 00:26:11.520 "data_offset": 0, 00:26:11.520 "data_size": 0 00:26:11.520 }, 00:26:11.520 { 00:26:11.520 "name": "BaseBdev4", 00:26:11.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.520 "is_configured": false, 00:26:11.520 "data_offset": 0, 00:26:11.520 "data_size": 0 00:26:11.520 } 00:26:11.520 ] 00:26:11.520 }' 00:26:11.520 00:45:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:11.520 00:45:45 -- common/autotest_common.sh@10 -- # set +x 00:26:12.088 00:45:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:12.346 [2024-04-27 00:45:45.909270] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:12.346 BaseBdev3 00:26:12.346 00:45:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:26:12.346 00:45:45 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:26:12.346 00:45:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:12.346 00:45:45 -- common/autotest_common.sh@887 -- # local i 00:26:12.346 00:45:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:12.346 00:45:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:12.346 00:45:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.605 00:45:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:12.863 [ 00:26:12.863 { 00:26:12.863 "name": "BaseBdev3", 00:26:12.863 "aliases": [ 00:26:12.863 "6bad82af-bae6-4a36-97f3-576a926e7d65" 00:26:12.863 ], 00:26:12.863 "product_name": "Malloc disk", 00:26:12.863 "block_size": 512, 00:26:12.863 "num_blocks": 65536, 00:26:12.863 "uuid": 
"6bad82af-bae6-4a36-97f3-576a926e7d65", 00:26:12.863 "assigned_rate_limits": { 00:26:12.863 "rw_ios_per_sec": 0, 00:26:12.863 "rw_mbytes_per_sec": 0, 00:26:12.863 "r_mbytes_per_sec": 0, 00:26:12.863 "w_mbytes_per_sec": 0 00:26:12.863 }, 00:26:12.863 "claimed": true, 00:26:12.864 "claim_type": "exclusive_write", 00:26:12.864 "zoned": false, 00:26:12.864 "supported_io_types": { 00:26:12.864 "read": true, 00:26:12.864 "write": true, 00:26:12.864 "unmap": true, 00:26:12.864 "write_zeroes": true, 00:26:12.864 "flush": true, 00:26:12.864 "reset": true, 00:26:12.864 "compare": false, 00:26:12.864 "compare_and_write": false, 00:26:12.864 "abort": true, 00:26:12.864 "nvme_admin": false, 00:26:12.864 "nvme_io": false 00:26:12.864 }, 00:26:12.864 "memory_domains": [ 00:26:12.864 { 00:26:12.864 "dma_device_id": "system", 00:26:12.864 "dma_device_type": 1 00:26:12.864 }, 00:26:12.864 { 00:26:12.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.864 "dma_device_type": 2 00:26:12.864 } 00:26:12.864 ], 00:26:12.864 "driver_specific": {} 00:26:12.864 } 00:26:12.864 ] 00:26:12.864 00:45:46 -- common/autotest_common.sh@893 -- # return 0 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.864 00:45:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.123 00:45:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:13.123 "name": "Existed_Raid", 00:26:13.123 "uuid": "919391ee-1300-4bde-9b7a-f084f9e6bd1b", 00:26:13.123 "strip_size_kb": 64, 00:26:13.123 "state": "configuring", 00:26:13.123 "raid_level": "raid5f", 00:26:13.123 "superblock": true, 00:26:13.123 "num_base_bdevs": 4, 00:26:13.123 "num_base_bdevs_discovered": 3, 00:26:13.123 "num_base_bdevs_operational": 4, 00:26:13.123 "base_bdevs_list": [ 00:26:13.123 { 00:26:13.123 "name": "BaseBdev1", 00:26:13.123 "uuid": "91c3520c-ad9e-42fe-b3f1-a558bf5a5dc1", 00:26:13.123 "is_configured": true, 00:26:13.123 "data_offset": 2048, 00:26:13.123 "data_size": 63488 00:26:13.123 }, 00:26:13.123 { 00:26:13.123 "name": "BaseBdev2", 00:26:13.123 "uuid": "30ef0db7-6375-4fa3-85db-201baeda92d0", 00:26:13.123 "is_configured": true, 00:26:13.123 "data_offset": 2048, 00:26:13.123 "data_size": 63488 00:26:13.123 }, 00:26:13.123 { 00:26:13.123 "name": "BaseBdev3", 00:26:13.123 "uuid": "6bad82af-bae6-4a36-97f3-576a926e7d65", 00:26:13.123 "is_configured": true, 00:26:13.123 "data_offset": 2048, 00:26:13.123 "data_size": 63488 00:26:13.123 }, 00:26:13.123 { 00:26:13.123 "name": "BaseBdev4", 00:26:13.123 
"uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.123 "is_configured": false, 00:26:13.123 "data_offset": 0, 00:26:13.123 "data_size": 0 00:26:13.123 } 00:26:13.123 ] 00:26:13.123 }' 00:26:13.123 00:45:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:13.123 00:45:46 -- common/autotest_common.sh@10 -- # set +x 00:26:13.689 00:45:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:14.257 [2024-04-27 00:45:47.537022] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:14.257 [2024-04-27 00:45:47.537647] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:14.257 [2024-04-27 00:45:47.537814] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:14.257 [2024-04-27 00:45:47.538039] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:26:14.257 BaseBdev4 00:26:14.257 [2024-04-27 00:45:47.544629] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:14.257 [2024-04-27 00:45:47.544814] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000011500 00:26:14.257 [2024-04-27 00:45:47.545140] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.257 00:45:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:26:14.257 00:45:47 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:26:14.257 00:45:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:26:14.257 00:45:47 -- common/autotest_common.sh@887 -- # local i 00:26:14.257 00:45:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:26:14.257 00:45:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:26:14.257 00:45:47 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:14.257 00:45:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:14.516 [ 00:26:14.516 { 00:26:14.516 "name": "BaseBdev4", 00:26:14.516 "aliases": [ 00:26:14.516 "dc83ae99-97be-4607-9953-4a9aaf3b3933" 00:26:14.516 ], 00:26:14.516 "product_name": "Malloc disk", 00:26:14.516 "block_size": 512, 00:26:14.516 "num_blocks": 65536, 00:26:14.516 "uuid": "dc83ae99-97be-4607-9953-4a9aaf3b3933", 00:26:14.516 "assigned_rate_limits": { 00:26:14.516 "rw_ios_per_sec": 0, 00:26:14.516 "rw_mbytes_per_sec": 0, 00:26:14.516 "r_mbytes_per_sec": 0, 00:26:14.516 "w_mbytes_per_sec": 0 00:26:14.516 }, 00:26:14.516 "claimed": true, 00:26:14.516 "claim_type": "exclusive_write", 00:26:14.516 "zoned": false, 00:26:14.516 "supported_io_types": { 00:26:14.516 "read": true, 00:26:14.516 "write": true, 00:26:14.516 "unmap": true, 00:26:14.517 "write_zeroes": true, 00:26:14.517 "flush": true, 00:26:14.517 "reset": true, 00:26:14.517 "compare": false, 00:26:14.517 "compare_and_write": false, 00:26:14.517 "abort": true, 00:26:14.517 "nvme_admin": false, 00:26:14.517 "nvme_io": false 00:26:14.517 }, 00:26:14.517 "memory_domains": [ 00:26:14.517 { 00:26:14.517 "dma_device_id": "system", 00:26:14.517 "dma_device_type": 1 00:26:14.517 }, 00:26:14.517 { 00:26:14.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.517 "dma_device_type": 2 00:26:14.517 } 00:26:14.517 ], 00:26:14.517 "driver_specific": {} 00:26:14.517 } 00:26:14.517 ] 
00:26:14.517 00:45:47 -- common/autotest_common.sh@893 -- # return 0 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.517 00:45:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.776 00:45:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:14.776 "name": "Existed_Raid", 00:26:14.776 "uuid": "919391ee-1300-4bde-9b7a-f084f9e6bd1b", 00:26:14.776 "strip_size_kb": 64, 00:26:14.776 "state": "online", 00:26:14.776 "raid_level": "raid5f", 00:26:14.776 "superblock": true, 00:26:14.776 "num_base_bdevs": 4, 00:26:14.776 "num_base_bdevs_discovered": 4, 00:26:14.776 "num_base_bdevs_operational": 4, 00:26:14.776 "base_bdevs_list": [ 00:26:14.776 { 00:26:14.776 "name": "BaseBdev1", 00:26:14.776 "uuid": "91c3520c-ad9e-42fe-b3f1-a558bf5a5dc1", 00:26:14.776 "is_configured": true, 00:26:14.776 "data_offset": 2048, 00:26:14.776 "data_size": 63488 00:26:14.776 }, 00:26:14.776 { 00:26:14.776 "name": "BaseBdev2", 00:26:14.776 "uuid": "30ef0db7-6375-4fa3-85db-201baeda92d0", 00:26:14.776 "is_configured": true, 00:26:14.776 "data_offset": 2048, 00:26:14.776 "data_size": 63488 00:26:14.776 }, 00:26:14.776 { 00:26:14.776 "name": "BaseBdev3", 00:26:14.776 "uuid": "6bad82af-bae6-4a36-97f3-576a926e7d65", 00:26:14.776 "is_configured": true, 00:26:14.776 "data_offset": 2048, 00:26:14.776 "data_size": 63488 00:26:14.776 }, 00:26:14.776 { 00:26:14.777 "name": "BaseBdev4", 00:26:14.777 "uuid": "dc83ae99-97be-4607-9953-4a9aaf3b3933", 00:26:14.777 "is_configured": true, 00:26:14.777 "data_offset": 2048, 00:26:14.777 "data_size": 63488 00:26:14.777 } 00:26:14.777 ] 00:26:14.777 }' 00:26:14.777 00:45:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:14.777 00:45:48 -- common/autotest_common.sh@10 -- # set +x 00:26:15.344 00:45:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:15.602 [2024-04-27 00:45:49.108936] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@117 -- # 
local raid_bdev_name=Existed_Raid 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.861 00:45:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.119 00:45:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:16.119 "name": "Existed_Raid", 00:26:16.119 "uuid": "919391ee-1300-4bde-9b7a-f084f9e6bd1b", 00:26:16.119 "strip_size_kb": 64, 00:26:16.119 "state": "online", 00:26:16.119 "raid_level": "raid5f", 00:26:16.119 "superblock": true, 00:26:16.119 "num_base_bdevs": 4, 00:26:16.119 "num_base_bdevs_discovered": 3, 00:26:16.119 "num_base_bdevs_operational": 3, 00:26:16.119 "base_bdevs_list": [ 00:26:16.119 { 00:26:16.119 "name": null, 00:26:16.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.119 "is_configured": false, 00:26:16.119 "data_offset": 2048, 00:26:16.119 "data_size": 63488 00:26:16.119 }, 00:26:16.119 { 00:26:16.119 "name": "BaseBdev2", 00:26:16.119 "uuid": "30ef0db7-6375-4fa3-85db-201baeda92d0", 00:26:16.119 "is_configured": true, 00:26:16.119 "data_offset": 2048, 00:26:16.119 "data_size": 63488 00:26:16.119 }, 00:26:16.119 { 00:26:16.119 "name": "BaseBdev3", 00:26:16.119 "uuid": "6bad82af-bae6-4a36-97f3-576a926e7d65", 00:26:16.119 "is_configured": true, 00:26:16.119 "data_offset": 2048, 00:26:16.119 "data_size": 63488 00:26:16.119 }, 00:26:16.119 { 00:26:16.119 "name": "BaseBdev4", 00:26:16.119 "uuid": "dc83ae99-97be-4607-9953-4a9aaf3b3933", 00:26:16.119 "is_configured": true, 00:26:16.119 "data_offset": 2048, 00:26:16.119 "data_size": 63488 00:26:16.119 } 00:26:16.119 ] 00:26:16.119 }' 00:26:16.119 00:45:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:16.119 00:45:49 -- common/autotest_common.sh@10 -- # set +x 00:26:16.687 00:45:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:26:16.687 00:45:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:16.687 00:45:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.687 00:45:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:16.945 00:45:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:16.946 00:45:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:16.946 00:45:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:16.946 [2024-04-27 00:45:50.523223] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:16.946 [2024-04-27 00:45:50.523671] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:17.204 [2024-04-27 00:45:50.597750] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.204 00:45:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:17.204 00:45:50 -- 
bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:17.204 00:45:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.204 00:45:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:17.463 00:45:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:17.463 00:45:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:17.463 00:45:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:17.722 [2024-04-27 00:45:51.117971] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:17.722 00:45:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:17.722 00:45:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:17.722 00:45:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.722 00:45:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:26:17.989 00:45:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:26:17.989 00:45:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:17.989 00:45:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:18.267 [2024-04-27 00:45:51.718911] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:18.267 [2024-04-27 00:45:51.719299] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name Existed_Raid, state offline 00:26:18.267 00:45:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:26:18.267 00:45:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:26:18.267 00:45:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.267 00:45:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:26:18.526 00:45:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:26:18.526 00:45:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:26:18.526 00:45:52 -- bdev/bdev_raid.sh@287 -- # killprocess 137824 00:26:18.526 00:45:52 -- common/autotest_common.sh@936 -- # '[' -z 137824 ']' 00:26:18.526 00:45:52 -- common/autotest_common.sh@940 -- # kill -0 137824 00:26:18.526 00:45:52 -- common/autotest_common.sh@941 -- # uname 00:26:18.526 00:45:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:18.526 00:45:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 137824 00:26:18.526 00:45:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:18.526 00:45:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:18.526 00:45:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 137824' 00:26:18.526 killing process with pid 137824 00:26:18.526 00:45:52 -- common/autotest_common.sh@955 -- # kill 137824 00:26:18.526 00:45:52 -- common/autotest_common.sh@960 -- # wait 137824 00:26:18.526 [2024-04-27 00:45:52.091229] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:18.526 [2024-04-27 00:45:52.091382] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:19.904 ************************************ 00:26:19.904 END TEST raid5f_state_function_test_sb 00:26:19.904 ************************************ 00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:26:19.904 00:26:19.904 real 0m15.608s 00:26:19.904 user 0m27.463s 
00:26:19.904 sys 0m2.032s
00:26:19.904 00:45:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:19.904 00:45:53 -- common/autotest_common.sh@10 -- # set +x
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:26:19.904 00:45:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:26:19.904 00:45:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:19.904 00:45:53 -- common/autotest_common.sh@10 -- # set +x
00:26:19.904 ************************************
00:26:19.904 START TEST raid5f_superblock_test
00:26:19.904 ************************************
00:26:19.904 00:45:53 -- common/autotest_common.sh@1111 -- # raid_superblock_test raid5f 4
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']'
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=138276
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:26:19.904 00:45:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 138276 /var/tmp/spdk-raid.sock
00:26:19.904 00:45:53 -- common/autotest_common.sh@817 -- # '[' -z 138276 ']'
00:26:19.904 00:45:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:26:19.904 00:45:53 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:19.904 00:45:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:26:19.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:45:53 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:19.904 00:45:53 -- common/autotest_common.sh@10 -- # set +x
00:26:19.904 [2024-04-27 00:45:53.360921] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization...
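The superblock test traced here starts from a bare bdev_svc application: it is launched with -r /var/tmp/spdk-raid.sock -L bdev_raid, and waitforlisten blocks until the JSON-RPC server answers on that socket before any of the malloc/passthru RPCs that follow are issued. A rough stand-in for that launch-and-wait step, with the binary path and flags copied from the trace (the polling loop is a simplification of the real waitforlisten, and rpc_get_methods is used here only as a cheap probe):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # probe the socket until the RPC server responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done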
00:26:19.904 [2024-04-27 00:45:53.361340] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138276 ] 00:26:20.162 [2024-04-27 00:45:53.528960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.162 [2024-04-27 00:45:53.735797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.419 [2024-04-27 00:45:53.933736] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:20.985 00:45:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:20.985 00:45:54 -- common/autotest_common.sh@850 -- # return 0 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:20.985 00:45:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:21.242 malloc1 00:26:21.242 00:45:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:21.501 [2024-04-27 00:45:54.858936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:21.501 [2024-04-27 00:45:54.859391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.501 [2024-04-27 00:45:54.859474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:26:21.501 [2024-04-27 00:45:54.859775] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.501 [2024-04-27 00:45:54.862447] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.501 [2024-04-27 00:45:54.862632] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:21.501 pt1 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:21.501 00:45:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:21.759 malloc2 00:26:21.759 00:45:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:26:21.759 [2024-04-27 00:45:55.329366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:21.759 [2024-04-27 00:45:55.329787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.759 [2024-04-27 00:45:55.329904] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:21.759 [2024-04-27 00:45:55.330256] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.759 [2024-04-27 00:45:55.333065] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.759 [2024-04-27 00:45:55.333263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:21.759 pt2 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:22.017 00:45:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:22.017 malloc3 00:26:22.276 00:45:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:22.276 [2024-04-27 00:45:55.862460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:22.276 [2024-04-27 00:45:55.862978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.276 [2024-04-27 00:45:55.863226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:26:22.534 [2024-04-27 00:45:55.863406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.534 [2024-04-27 00:45:55.866051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.534 [2024-04-27 00:45:55.866244] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:22.534 pt3 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:22.534 00:45:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:22.534 malloc4 00:26:22.793 00:45:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:26:22.793 [2024-04-27 00:45:56.316388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:22.793 [2024-04-27 00:45:56.316895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.793 [2024-04-27 00:45:56.317110] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:26:22.793 [2024-04-27 00:45:56.317319] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.793 [2024-04-27 00:45:56.320705] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.793 [2024-04-27 00:45:56.321019] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:22.793 pt4 00:26:22.793 00:45:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:26:22.793 00:45:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:26:22.793 00:45:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:23.051 [2024-04-27 00:45:56.553494] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:23.051 [2024-04-27 00:45:56.556018] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:23.051 [2024-04-27 00:45:56.556251] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:23.051 [2024-04-27 00:45:56.556484] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:23.051 [2024-04-27 00:45:56.556907] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:26:23.051 [2024-04-27 00:45:56.557053] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:23.051 [2024-04-27 00:45:56.557253] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:23.051 [2024-04-27 00:45:56.563551] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:26:23.051 [2024-04-27 00:45:56.563700] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:26:23.051 [2024-04-27 00:45:56.564029] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.051 00:45:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.309 00:45:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:23.309 "name": "raid_bdev1", 00:26:23.309 "uuid": 
"6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:23.309 "strip_size_kb": 64, 00:26:23.309 "state": "online", 00:26:23.309 "raid_level": "raid5f", 00:26:23.309 "superblock": true, 00:26:23.309 "num_base_bdevs": 4, 00:26:23.309 "num_base_bdevs_discovered": 4, 00:26:23.309 "num_base_bdevs_operational": 4, 00:26:23.309 "base_bdevs_list": [ 00:26:23.309 { 00:26:23.309 "name": "pt1", 00:26:23.309 "uuid": "e6935aa0-6155-5fc2-a97b-b2c0a90b6349", 00:26:23.309 "is_configured": true, 00:26:23.309 "data_offset": 2048, 00:26:23.309 "data_size": 63488 00:26:23.309 }, 00:26:23.309 { 00:26:23.309 "name": "pt2", 00:26:23.309 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:23.309 "is_configured": true, 00:26:23.309 "data_offset": 2048, 00:26:23.309 "data_size": 63488 00:26:23.309 }, 00:26:23.309 { 00:26:23.309 "name": "pt3", 00:26:23.309 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:23.309 "is_configured": true, 00:26:23.309 "data_offset": 2048, 00:26:23.309 "data_size": 63488 00:26:23.309 }, 00:26:23.309 { 00:26:23.309 "name": "pt4", 00:26:23.309 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:23.309 "is_configured": true, 00:26:23.309 "data_offset": 2048, 00:26:23.309 "data_size": 63488 00:26:23.309 } 00:26:23.309 ] 00:26:23.309 }' 00:26:23.309 00:45:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:23.309 00:45:56 -- common/autotest_common.sh@10 -- # set +x 00:26:23.876 00:45:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:23.876 00:45:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:26:24.134 [2024-04-27 00:45:57.671977] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:24.134 00:45:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6418f3da-2252-41f1-a61d-16020b5d8fda 00:26:24.134 00:45:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 6418f3da-2252-41f1-a61d-16020b5d8fda ']' 00:26:24.134 00:45:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:24.392 [2024-04-27 00:45:57.935832] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:24.392 [2024-04-27 00:45:57.936104] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:24.392 [2024-04-27 00:45:57.936348] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:24.392 [2024-04-27 00:45:57.936580] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:24.392 [2024-04-27 00:45:57.936711] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:26:24.392 00:45:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:26:24.392 00:45:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.650 00:45:58 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:26:24.650 00:45:58 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:26:24.650 00:45:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:24.650 00:45:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:24.908 00:45:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:24.908 00:45:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:26:25.166 00:45:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:25.166 00:45:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:25.425 00:45:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:26:25.425 00:45:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:25.683 00:45:59 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:25.683 00:45:59 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:25.683 00:45:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:26:25.683 00:45:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:25.683 00:45:59 -- common/autotest_common.sh@638 -- # local es=0 00:26:25.683 00:45:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:25.683 00:45:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.683 00:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:25.683 00:45:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.683 00:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:25.683 00:45:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.683 00:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:25.683 00:45:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:25.683 00:45:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:25.683 00:45:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:25.942 [2024-04-27 00:45:59.500114] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:25.942 [2024-04-27 00:45:59.502617] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:25.942 [2024-04-27 00:45:59.502920] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:25.942 [2024-04-27 00:45:59.503122] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:25.942 [2024-04-27 00:45:59.503352] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:26:25.942 [2024-04-27 00:45:59.503595] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:26:25.942 [2024-04-27 00:45:59.503766] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:26:25.942 [2024-04-27 00:45:59.503946] bdev_raid.c:3026:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:26:25.942 [2024-04-27 00:45:59.504117] 
bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:25.942 [2024-04-27 00:45:59.504228] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:26:25.942 request: 00:26:25.942 { 00:26:25.942 "name": "raid_bdev1", 00:26:25.942 "raid_level": "raid5f", 00:26:25.942 "base_bdevs": [ 00:26:25.942 "malloc1", 00:26:25.942 "malloc2", 00:26:25.942 "malloc3", 00:26:25.942 "malloc4" 00:26:25.942 ], 00:26:25.942 "superblock": false, 00:26:25.942 "strip_size_kb": 64, 00:26:25.942 "method": "bdev_raid_create", 00:26:25.942 "req_id": 1 00:26:25.942 } 00:26:25.942 Got JSON-RPC error response 00:26:25.942 response: 00:26:25.942 { 00:26:25.942 "code": -17, 00:26:25.942 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:25.942 } 00:26:25.942 00:45:59 -- common/autotest_common.sh@641 -- # es=1 00:26:25.942 00:45:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:25.942 00:45:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:25.942 00:45:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:25.942 00:45:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.942 00:45:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:26:26.200 00:45:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:26:26.200 00:45:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:26:26.201 00:45:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:26.459 [2024-04-27 00:45:59.932774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:26.459 [2024-04-27 00:45:59.933212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.459 [2024-04-27 00:45:59.933373] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:26:26.459 [2024-04-27 00:45:59.933506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.459 [2024-04-27 00:45:59.936239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.459 [2024-04-27 00:45:59.936467] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:26.459 [2024-04-27 00:45:59.936728] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:26.459 [2024-04-27 00:45:59.936927] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:26.459 pt1 00:26:26.459 00:45:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:26.459 00:45:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.460 00:45:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.717 00:46:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:26.717 "name": "raid_bdev1", 00:26:26.717 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:26.717 "strip_size_kb": 64, 00:26:26.717 "state": "configuring", 00:26:26.717 "raid_level": "raid5f", 00:26:26.717 "superblock": true, 00:26:26.717 "num_base_bdevs": 4, 00:26:26.717 "num_base_bdevs_discovered": 1, 00:26:26.717 "num_base_bdevs_operational": 4, 00:26:26.717 "base_bdevs_list": [ 00:26:26.717 { 00:26:26.717 "name": "pt1", 00:26:26.717 "uuid": "e6935aa0-6155-5fc2-a97b-b2c0a90b6349", 00:26:26.717 "is_configured": true, 00:26:26.717 "data_offset": 2048, 00:26:26.717 "data_size": 63488 00:26:26.717 }, 00:26:26.717 { 00:26:26.717 "name": null, 00:26:26.717 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:26.717 "is_configured": false, 00:26:26.717 "data_offset": 2048, 00:26:26.717 "data_size": 63488 00:26:26.717 }, 00:26:26.717 { 00:26:26.717 "name": null, 00:26:26.717 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:26.717 "is_configured": false, 00:26:26.717 "data_offset": 2048, 00:26:26.717 "data_size": 63488 00:26:26.717 }, 00:26:26.717 { 00:26:26.717 "name": null, 00:26:26.717 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:26.717 "is_configured": false, 00:26:26.717 "data_offset": 2048, 00:26:26.717 "data_size": 63488 00:26:26.717 } 00:26:26.717 ] 00:26:26.717 }' 00:26:26.717 00:46:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:26.717 00:46:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.283 00:46:00 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:26:27.283 00:46:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:27.542 [2024-04-27 00:46:01.049174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:27.543 [2024-04-27 00:46:01.049538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.543 [2024-04-27 00:46:01.049639] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:26:27.543 [2024-04-27 00:46:01.049923] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.543 [2024-04-27 00:46:01.050607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.543 [2024-04-27 00:46:01.050846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:27.543 [2024-04-27 00:46:01.051110] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:27.543 [2024-04-27 00:46:01.051273] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:27.543 pt2 00:26:27.543 00:46:01 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:27.802 [2024-04-27 00:46:01.265163] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:27.802 00:46:01 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.802 00:46:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.060 00:46:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:28.060 "name": "raid_bdev1", 00:26:28.060 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:28.060 "strip_size_kb": 64, 00:26:28.060 "state": "configuring", 00:26:28.060 "raid_level": "raid5f", 00:26:28.060 "superblock": true, 00:26:28.060 "num_base_bdevs": 4, 00:26:28.060 "num_base_bdevs_discovered": 1, 00:26:28.060 "num_base_bdevs_operational": 4, 00:26:28.060 "base_bdevs_list": [ 00:26:28.060 { 00:26:28.060 "name": "pt1", 00:26:28.060 "uuid": "e6935aa0-6155-5fc2-a97b-b2c0a90b6349", 00:26:28.060 "is_configured": true, 00:26:28.060 "data_offset": 2048, 00:26:28.060 "data_size": 63488 00:26:28.060 }, 00:26:28.060 { 00:26:28.060 "name": null, 00:26:28.060 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:28.060 "is_configured": false, 00:26:28.060 "data_offset": 2048, 00:26:28.060 "data_size": 63488 00:26:28.060 }, 00:26:28.060 { 00:26:28.060 "name": null, 00:26:28.060 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:28.060 "is_configured": false, 00:26:28.060 "data_offset": 2048, 00:26:28.060 "data_size": 63488 00:26:28.060 }, 00:26:28.060 { 00:26:28.060 "name": null, 00:26:28.060 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:28.060 "is_configured": false, 00:26:28.060 "data_offset": 2048, 00:26:28.060 "data_size": 63488 00:26:28.060 } 00:26:28.060 ] 00:26:28.060 }' 00:26:28.060 00:46:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:28.060 00:46:01 -- common/autotest_common.sh@10 -- # set +x 00:26:28.629 00:46:02 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:26:28.629 00:46:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:28.629 00:46:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:28.888 [2024-04-27 00:46:02.393402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:28.888 [2024-04-27 00:46:02.393760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:28.888 [2024-04-27 00:46:02.393940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:28.888 [2024-04-27 00:46:02.394079] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.888 [2024-04-27 00:46:02.394750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.888 [2024-04-27 00:46:02.395005] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:28.888 [2024-04-27 00:46:02.395251] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:28.888 [2024-04-27 00:46:02.395406] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:28.888 pt2 00:26:28.888 00:46:02 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:28.888 00:46:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:28.888 00:46:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:29.147 [2024-04-27 00:46:02.657492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:29.147 [2024-04-27 00:46:02.657858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.147 [2024-04-27 00:46:02.658028] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:29.147 [2024-04-27 00:46:02.658168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.147 [2024-04-27 00:46:02.658887] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.147 [2024-04-27 00:46:02.659177] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:29.147 [2024-04-27 00:46:02.659432] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:29.147 [2024-04-27 00:46:02.659578] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:29.147 pt3 00:26:29.147 00:46:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:29.147 00:46:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:26:29.147 00:46:02 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:29.406 [2024-04-27 00:46:02.861539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:29.406 [2024-04-27 00:46:02.861906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.406 [2024-04-27 00:46:02.862002] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:26:29.406 [2024-04-27 00:46:02.862277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.406 [2024-04-27 00:46:02.863010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.406 [2024-04-27 00:46:02.863244] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:29.406 [2024-04-27 00:46:02.863493] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:26:29.406 [2024-04-27 00:46:02.863632] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:29.406 [2024-04-27 00:46:02.863982] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:26:29.406 [2024-04-27 00:46:02.864136] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:29.406 [2024-04-27 00:46:02.864336] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:26:29.406 [2024-04-27 00:46:02.870459] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:26:29.406 [2024-04-27 00:46:02.870646] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:26:29.406 [2024-04-27 00:46:02.870989] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.406 pt4 00:26:29.406 00:46:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:26:29.406 00:46:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:26:29.406 00:46:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:29.406 00:46:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:29.406 00:46:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:29.406 00:46:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:29.407 00:46:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:29.407 00:46:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:29.407 00:46:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:29.407 00:46:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:29.407 00:46:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:29.407 00:46:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:29.407 00:46:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.407 00:46:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.665 00:46:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:29.665 "name": "raid_bdev1", 00:26:29.665 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:29.665 "strip_size_kb": 64, 00:26:29.665 "state": "online", 00:26:29.665 "raid_level": "raid5f", 00:26:29.665 "superblock": true, 00:26:29.665 "num_base_bdevs": 4, 00:26:29.665 "num_base_bdevs_discovered": 4, 00:26:29.665 "num_base_bdevs_operational": 4, 00:26:29.665 "base_bdevs_list": [ 00:26:29.665 { 00:26:29.665 "name": "pt1", 00:26:29.665 "uuid": "e6935aa0-6155-5fc2-a97b-b2c0a90b6349", 00:26:29.665 "is_configured": true, 00:26:29.665 "data_offset": 2048, 00:26:29.665 "data_size": 63488 00:26:29.665 }, 00:26:29.665 { 00:26:29.665 "name": "pt2", 00:26:29.665 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:29.665 "is_configured": true, 00:26:29.665 "data_offset": 2048, 00:26:29.665 "data_size": 63488 00:26:29.665 }, 00:26:29.665 { 00:26:29.665 "name": "pt3", 00:26:29.665 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:29.665 "is_configured": true, 00:26:29.665 "data_offset": 2048, 00:26:29.665 "data_size": 63488 00:26:29.665 }, 00:26:29.665 { 00:26:29.665 "name": "pt4", 00:26:29.665 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:29.665 "is_configured": true, 00:26:29.665 "data_offset": 2048, 00:26:29.665 "data_size": 63488 00:26:29.665 } 00:26:29.665 ] 00:26:29.665 }' 00:26:29.665 00:46:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:29.665 00:46:03 -- common/autotest_common.sh@10 -- # set +x 00:26:30.231 00:46:03 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:30.231 00:46:03 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:26:30.490 [2024-04-27 00:46:03.919190] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:30.490 00:46:03 -- bdev/bdev_raid.sh@430 -- # '[' 6418f3da-2252-41f1-a61d-16020b5d8fda '!=' 6418f3da-2252-41f1-a61d-16020b5d8fda ']' 00:26:30.490 00:46:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:26:30.490 00:46:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:26:30.490 00:46:03 -- bdev/bdev_raid.sh@196 -- # return 0 00:26:30.490 00:46:03 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:30.749 [2024-04-27 00:46:04.191105] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.749 00:46:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.009 00:46:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:31.009 "name": "raid_bdev1", 00:26:31.009 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:31.009 "strip_size_kb": 64, 00:26:31.009 "state": "online", 00:26:31.009 "raid_level": "raid5f", 00:26:31.009 "superblock": true, 00:26:31.009 "num_base_bdevs": 4, 00:26:31.009 "num_base_bdevs_discovered": 3, 00:26:31.009 "num_base_bdevs_operational": 3, 00:26:31.009 "base_bdevs_list": [ 00:26:31.009 { 00:26:31.009 "name": null, 00:26:31.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.009 "is_configured": false, 00:26:31.009 "data_offset": 2048, 00:26:31.009 "data_size": 63488 00:26:31.009 }, 00:26:31.009 { 00:26:31.009 "name": "pt2", 00:26:31.009 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:31.009 "is_configured": true, 00:26:31.009 "data_offset": 2048, 00:26:31.009 "data_size": 63488 00:26:31.009 }, 00:26:31.009 { 00:26:31.009 "name": "pt3", 00:26:31.009 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:31.009 "is_configured": true, 00:26:31.009 "data_offset": 2048, 00:26:31.009 "data_size": 63488 00:26:31.009 }, 00:26:31.009 { 00:26:31.009 "name": "pt4", 00:26:31.009 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:31.009 "is_configured": true, 00:26:31.009 "data_offset": 2048, 00:26:31.009 "data_size": 63488 00:26:31.009 } 00:26:31.009 ] 00:26:31.009 }' 00:26:31.009 00:46:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:31.009 00:46:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.577 00:46:05 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:31.836 [2024-04-27 00:46:05.311276] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:31.836 [2024-04-27 00:46:05.311562] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:31.836 [2024-04-27 00:46:05.311801] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:31.836 [2024-04-27 00:46:05.312030] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:31.836 [2024-04-27 00:46:05.312163] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:26:31.836 00:46:05 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.836 00:46:05 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:26:32.095 
00:46:05 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:26:32.095 00:46:05 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:26:32.095 00:46:05 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:26:32.095 00:46:05 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:32.095 00:46:05 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:32.353 00:46:05 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:32.353 00:46:05 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:32.353 00:46:05 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:32.612 00:46:06 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:32.612 00:46:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:32.612 00:46:06 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:32.870 00:46:06 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:26:32.870 00:46:06 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:26:32.870 00:46:06 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:26:32.870 00:46:06 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:32.870 00:46:06 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:33.129 [2024-04-27 00:46:06.474824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:33.129 [2024-04-27 00:46:06.475150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.129 [2024-04-27 00:46:06.475251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:33.129 [2024-04-27 00:46:06.475529] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.129 [2024-04-27 00:46:06.478188] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.129 [2024-04-27 00:46:06.478435] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:33.129 [2024-04-27 00:46:06.478701] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:33.129 [2024-04-27 00:46:06.478917] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:33.129 pt2 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.129 00:46:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.388 00:46:06 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:26:33.388 "name": "raid_bdev1", 00:26:33.388 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:33.388 "strip_size_kb": 64, 00:26:33.388 "state": "configuring", 00:26:33.388 "raid_level": "raid5f", 00:26:33.388 "superblock": true, 00:26:33.388 "num_base_bdevs": 4, 00:26:33.388 "num_base_bdevs_discovered": 1, 00:26:33.388 "num_base_bdevs_operational": 3, 00:26:33.388 "base_bdevs_list": [ 00:26:33.388 { 00:26:33.388 "name": null, 00:26:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:33.388 "is_configured": false, 00:26:33.388 "data_offset": 2048, 00:26:33.388 "data_size": 63488 00:26:33.388 }, 00:26:33.388 { 00:26:33.388 "name": "pt2", 00:26:33.388 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:33.388 "is_configured": true, 00:26:33.388 "data_offset": 2048, 00:26:33.388 "data_size": 63488 00:26:33.388 }, 00:26:33.388 { 00:26:33.388 "name": null, 00:26:33.388 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:33.388 "is_configured": false, 00:26:33.388 "data_offset": 2048, 00:26:33.388 "data_size": 63488 00:26:33.388 }, 00:26:33.388 { 00:26:33.388 "name": null, 00:26:33.388 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:33.388 "is_configured": false, 00:26:33.388 "data_offset": 2048, 00:26:33.388 "data_size": 63488 00:26:33.388 } 00:26:33.388 ] 00:26:33.388 }' 00:26:33.388 00:46:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:33.388 00:46:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.955 00:46:07 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:26:33.955 00:46:07 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:33.955 00:46:07 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:34.213 [2024-04-27 00:46:07.587190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:34.213 [2024-04-27 00:46:07.587556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:34.213 [2024-04-27 00:46:07.587725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:34.213 [2024-04-27 00:46:07.587904] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:34.213 [2024-04-27 00:46:07.588488] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:34.213 [2024-04-27 00:46:07.588687] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:34.213 [2024-04-27 00:46:07.588918] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:34.213 [2024-04-27 00:46:07.589045] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:34.213 pt3 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
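The locals traced at @117 through @125 above and below belong to the test helper verify_raid_bdev_state in test/bdev/bdev_raid.sh. A condensed sketch of that helper, reconstructed from this trace (the rpc.py invocation and jq filter appear verbatim at the @127 lines in this log; the comparison logic at the end is an assumption inferred from the arguments each call site passes):

    verify_raid_bdev_state() {
        local raid_bdev_name=$1              # e.g. raid_bdev1
        local expected_state=$2              # configuring | online | offline
        local raid_level=$3                  # e.g. raid5f
        local strip_size=$4                  # strip size in KB, e.g. 64
        local num_base_bdevs_operational=$5
        local raid_bdev_info num_base_bdevs num_base_bdevs_discovered tmp

        # @127: fetch all raid bdevs over the RPC socket, keep only the one under test
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # Assumption: the rest of the helper compares .state, .raid_level, .strip_size_kb
        # and the discovered/operational base bdev counts in the JSON against the
        # arguments above, failing the test on any mismatch; one such check might be:
        [ "$(jq -r .state <<< "$raid_bdev_info")" = "$expected_state" ] || return 1
    }

Each JSON blob printed after a @127 line in this log is the raid_bdev_info value captured by that command substitution, so the state transitions (configuring, online, offline) and the num_base_bdevs_discovered counts can be read directly from those dumps.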
00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.213 00:46:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.471 00:46:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:34.471 "name": "raid_bdev1", 00:26:34.471 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:34.471 "strip_size_kb": 64, 00:26:34.471 "state": "configuring", 00:26:34.471 "raid_level": "raid5f", 00:26:34.471 "superblock": true, 00:26:34.471 "num_base_bdevs": 4, 00:26:34.471 "num_base_bdevs_discovered": 2, 00:26:34.471 "num_base_bdevs_operational": 3, 00:26:34.471 "base_bdevs_list": [ 00:26:34.471 { 00:26:34.471 "name": null, 00:26:34.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.471 "is_configured": false, 00:26:34.471 "data_offset": 2048, 00:26:34.471 "data_size": 63488 00:26:34.471 }, 00:26:34.471 { 00:26:34.471 "name": "pt2", 00:26:34.471 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:34.471 "is_configured": true, 00:26:34.471 "data_offset": 2048, 00:26:34.471 "data_size": 63488 00:26:34.471 }, 00:26:34.471 { 00:26:34.471 "name": "pt3", 00:26:34.471 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:34.471 "is_configured": true, 00:26:34.471 "data_offset": 2048, 00:26:34.471 "data_size": 63488 00:26:34.471 }, 00:26:34.471 { 00:26:34.471 "name": null, 00:26:34.471 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:34.471 "is_configured": false, 00:26:34.471 "data_offset": 2048, 00:26:34.471 "data_size": 63488 00:26:34.471 } 00:26:34.471 ] 00:26:34.471 }' 00:26:34.471 00:46:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:34.471 00:46:07 -- common/autotest_common.sh@10 -- # set +x 00:26:35.037 00:46:08 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:26:35.037 00:46:08 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:26:35.037 00:46:08 -- bdev/bdev_raid.sh@462 -- # i=3 00:26:35.037 00:46:08 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:35.294 [2024-04-27 00:46:08.739762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:35.294 [2024-04-27 00:46:08.740107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.294 [2024-04-27 00:46:08.740210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:35.294 [2024-04-27 00:46:08.740461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.294 [2024-04-27 00:46:08.741153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.294 [2024-04-27 00:46:08.741410] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:35.294 [2024-04-27 00:46:08.741662] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:26:35.295 [2024-04-27 00:46:08.741813] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:35.295 [2024-04-27 00:46:08.742122] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011880 00:26:35.295 [2024-04-27 00:46:08.742268] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:35.295 [2024-04-27 00:46:08.742545] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000062f0 00:26:35.295 [2024-04-27 00:46:08.749801] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011880 00:26:35.295 [2024-04-27 00:46:08.749985] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011880 00:26:35.295 [2024-04-27 00:46:08.750471] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.295 pt4 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.295 00:46:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.553 00:46:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:35.553 "name": "raid_bdev1", 00:26:35.553 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:35.553 "strip_size_kb": 64, 00:26:35.553 "state": "online", 00:26:35.553 "raid_level": "raid5f", 00:26:35.553 "superblock": true, 00:26:35.553 "num_base_bdevs": 4, 00:26:35.553 "num_base_bdevs_discovered": 3, 00:26:35.553 "num_base_bdevs_operational": 3, 00:26:35.553 "base_bdevs_list": [ 00:26:35.553 { 00:26:35.553 "name": null, 00:26:35.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.553 "is_configured": false, 00:26:35.553 "data_offset": 2048, 00:26:35.553 "data_size": 63488 00:26:35.553 }, 00:26:35.553 { 00:26:35.553 "name": "pt2", 00:26:35.553 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:35.553 "is_configured": true, 00:26:35.553 "data_offset": 2048, 00:26:35.553 "data_size": 63488 00:26:35.553 }, 00:26:35.553 { 00:26:35.553 "name": "pt3", 00:26:35.553 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:35.553 "is_configured": true, 00:26:35.553 "data_offset": 2048, 00:26:35.553 "data_size": 63488 00:26:35.553 }, 00:26:35.553 { 00:26:35.553 "name": "pt4", 00:26:35.553 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:35.553 "is_configured": true, 00:26:35.553 "data_offset": 2048, 00:26:35.553 "data_size": 63488 00:26:35.553 } 00:26:35.553 ] 00:26:35.553 }' 00:26:35.553 00:46:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:35.553 00:46:08 -- common/autotest_common.sh@10 -- # set +x 00:26:36.120 00:46:09 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:26:36.120 00:46:09 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:36.378 [2024-04-27 00:46:09.906765] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:36.378 [2024-04-27 00:46:09.906992] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:36.378 [2024-04-27 00:46:09.907181] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:36.378 [2024-04-27 00:46:09.907394] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:36.378 [2024-04-27 00:46:09.907526] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011880 name raid_bdev1, state offline 00:26:36.378 00:46:09 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.378 00:46:09 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:26:36.636 00:46:10 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:26:36.636 00:46:10 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:26:36.637 00:46:10 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:36.895 [2024-04-27 00:46:10.339001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:36.895 [2024-04-27 00:46:10.339358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.895 [2024-04-27 00:46:10.339441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:36.895 [2024-04-27 00:46:10.339748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.895 [2024-04-27 00:46:10.341938] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.895 [2024-04-27 00:46:10.342123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:36.895 [2024-04-27 00:46:10.342340] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:26:36.895 [2024-04-27 00:46:10.342556] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:36.895 pt1 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.895 00:46:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.154 00:46:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:37.154 "name": "raid_bdev1", 00:26:37.154 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:37.154 "strip_size_kb": 64, 00:26:37.154 "state": "configuring", 00:26:37.154 "raid_level": "raid5f", 00:26:37.154 "superblock": true, 00:26:37.154 "num_base_bdevs": 4, 00:26:37.154 "num_base_bdevs_discovered": 1, 00:26:37.154 "num_base_bdevs_operational": 4, 00:26:37.154 "base_bdevs_list": [ 00:26:37.154 { 00:26:37.154 "name": "pt1", 00:26:37.154 "uuid": "e6935aa0-6155-5fc2-a97b-b2c0a90b6349", 00:26:37.154 "is_configured": true, 
00:26:37.154 "data_offset": 2048, 00:26:37.154 "data_size": 63488 00:26:37.154 }, 00:26:37.154 { 00:26:37.154 "name": null, 00:26:37.154 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:37.154 "is_configured": false, 00:26:37.154 "data_offset": 2048, 00:26:37.154 "data_size": 63488 00:26:37.154 }, 00:26:37.154 { 00:26:37.154 "name": null, 00:26:37.154 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:37.154 "is_configured": false, 00:26:37.154 "data_offset": 2048, 00:26:37.154 "data_size": 63488 00:26:37.154 }, 00:26:37.154 { 00:26:37.154 "name": null, 00:26:37.154 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:37.154 "is_configured": false, 00:26:37.154 "data_offset": 2048, 00:26:37.154 "data_size": 63488 00:26:37.154 } 00:26:37.154 ] 00:26:37.154 }' 00:26:37.154 00:46:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:37.154 00:46:10 -- common/autotest_common.sh@10 -- # set +x 00:26:37.728 00:46:11 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:26:37.729 00:46:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:37.729 00:46:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:37.996 00:46:11 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:37.996 00:46:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:37.996 00:46:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:38.254 00:46:11 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:38.254 00:46:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:38.254 00:46:11 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:38.511 00:46:11 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:26:38.511 00:46:11 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:26:38.511 00:46:11 -- bdev/bdev_raid.sh@489 -- # i=3 00:26:38.511 00:46:11 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:38.769 [2024-04-27 00:46:12.163131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:38.769 [2024-04-27 00:46:12.163590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.769 [2024-04-27 00:46:12.163752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:26:38.770 [2024-04-27 00:46:12.163908] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.770 [2024-04-27 00:46:12.164534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.770 [2024-04-27 00:46:12.164710] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:38.770 [2024-04-27 00:46:12.164920] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:26:38.770 [2024-04-27 00:46:12.165025] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:38.770 [2024-04-27 00:46:12.165116] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:38.770 [2024-04-27 00:46:12.165172] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011c00 name raid_bdev1, state configuring 00:26:38.770 [2024-04-27 00:46:12.165378] 
bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:38.770 pt4 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.770 00:46:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.028 00:46:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:39.028 "name": "raid_bdev1", 00:26:39.028 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:39.028 "strip_size_kb": 64, 00:26:39.028 "state": "configuring", 00:26:39.028 "raid_level": "raid5f", 00:26:39.028 "superblock": true, 00:26:39.028 "num_base_bdevs": 4, 00:26:39.028 "num_base_bdevs_discovered": 1, 00:26:39.028 "num_base_bdevs_operational": 3, 00:26:39.028 "base_bdevs_list": [ 00:26:39.028 { 00:26:39.028 "name": null, 00:26:39.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:39.028 "is_configured": false, 00:26:39.028 "data_offset": 2048, 00:26:39.028 "data_size": 63488 00:26:39.028 }, 00:26:39.028 { 00:26:39.028 "name": null, 00:26:39.028 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:39.028 "is_configured": false, 00:26:39.028 "data_offset": 2048, 00:26:39.028 "data_size": 63488 00:26:39.028 }, 00:26:39.028 { 00:26:39.028 "name": null, 00:26:39.028 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:39.028 "is_configured": false, 00:26:39.028 "data_offset": 2048, 00:26:39.028 "data_size": 63488 00:26:39.028 }, 00:26:39.028 { 00:26:39.028 "name": "pt4", 00:26:39.028 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:39.028 "is_configured": true, 00:26:39.028 "data_offset": 2048, 00:26:39.028 "data_size": 63488 00:26:39.028 } 00:26:39.028 ] 00:26:39.028 }' 00:26:39.028 00:46:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:39.028 00:46:12 -- common/autotest_common.sh@10 -- # set +x 00:26:39.594 00:46:13 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:26:39.594 00:46:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:39.594 00:46:13 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:39.852 [2024-04-27 00:46:13.315381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:39.852 [2024-04-27 00:46:13.315530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.852 [2024-04-27 00:46:13.315578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:39.852 [2024-04-27 00:46:13.315609] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.852 [2024-04-27 00:46:13.316156] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.852 [2024-04-27 00:46:13.316215] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:39.852 [2024-04-27 00:46:13.316326] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:26:39.852 [2024-04-27 00:46:13.316365] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:39.852 pt2 00:26:39.852 00:46:13 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:26:39.852 00:46:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:39.852 00:46:13 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:40.112 [2024-04-27 00:46:13.579583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:40.112 [2024-04-27 00:46:13.579713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.112 [2024-04-27 00:46:13.579786] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:26:40.112 [2024-04-27 00:46:13.579845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.112 [2024-04-27 00:46:13.580380] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.112 [2024-04-27 00:46:13.580477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:40.112 [2024-04-27 00:46:13.580596] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:26:40.112 [2024-04-27 00:46:13.580623] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:40.112 [2024-04-27 00:46:13.580774] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011f80 00:26:40.112 [2024-04-27 00:46:13.580787] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:40.112 [2024-04-27 00:46:13.580872] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:26:40.112 [2024-04-27 00:46:13.587974] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011f80 00:26:40.112 [2024-04-27 00:46:13.588000] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011f80 00:26:40.112 [2024-04-27 00:46:13.588254] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:40.112 pt3 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.112 00:46:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.370 00:46:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:40.370 "name": "raid_bdev1", 00:26:40.370 "uuid": "6418f3da-2252-41f1-a61d-16020b5d8fda", 00:26:40.370 "strip_size_kb": 64, 00:26:40.370 "state": "online", 00:26:40.370 "raid_level": "raid5f", 00:26:40.370 "superblock": true, 00:26:40.370 "num_base_bdevs": 4, 00:26:40.370 "num_base_bdevs_discovered": 3, 00:26:40.370 "num_base_bdevs_operational": 3, 00:26:40.370 "base_bdevs_list": [ 00:26:40.370 { 00:26:40.370 "name": null, 00:26:40.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.370 "is_configured": false, 00:26:40.370 "data_offset": 2048, 00:26:40.370 "data_size": 63488 00:26:40.370 }, 00:26:40.370 { 00:26:40.370 "name": "pt2", 00:26:40.370 "uuid": "4bd77297-113f-59b8-aebb-b9a8da398332", 00:26:40.370 "is_configured": true, 00:26:40.370 "data_offset": 2048, 00:26:40.370 "data_size": 63488 00:26:40.370 }, 00:26:40.370 { 00:26:40.370 "name": "pt3", 00:26:40.370 "uuid": "7979980c-876f-5e30-a005-943b22252e33", 00:26:40.370 "is_configured": true, 00:26:40.370 "data_offset": 2048, 00:26:40.370 "data_size": 63488 00:26:40.370 }, 00:26:40.370 { 00:26:40.370 "name": "pt4", 00:26:40.370 "uuid": "7dce0344-0994-5704-adf4-42629439130f", 00:26:40.370 "is_configured": true, 00:26:40.370 "data_offset": 2048, 00:26:40.370 "data_size": 63488 00:26:40.370 } 00:26:40.370 ] 00:26:40.370 }' 00:26:40.370 00:46:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:40.370 00:46:13 -- common/autotest_common.sh@10 -- # set +x 00:26:40.938 00:46:14 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:40.938 00:46:14 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:26:41.197 [2024-04-27 00:46:14.705168] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:41.197 00:46:14 -- bdev/bdev_raid.sh@506 -- # '[' 6418f3da-2252-41f1-a61d-16020b5d8fda '!=' 6418f3da-2252-41f1-a61d-16020b5d8fda ']' 00:26:41.197 00:46:14 -- bdev/bdev_raid.sh@511 -- # killprocess 138276 00:26:41.197 00:46:14 -- common/autotest_common.sh@936 -- # '[' -z 138276 ']' 00:26:41.197 00:46:14 -- common/autotest_common.sh@940 -- # kill -0 138276 00:26:41.197 00:46:14 -- common/autotest_common.sh@941 -- # uname 00:26:41.197 00:46:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:41.197 00:46:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 138276 00:26:41.197 00:46:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:41.197 00:46:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:41.197 00:46:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138276' 00:26:41.197 killing process with pid 138276 00:26:41.197 00:46:14 -- common/autotest_common.sh@955 -- # kill 138276 00:26:41.197 [2024-04-27 00:46:14.752089] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:41.197 [2024-04-27 00:46:14.752177] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:41.197 [2024-04-27 00:46:14.752253] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:41.197 [2024-04-27 00:46:14.752269] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011f80 name raid_bdev1, 
state offline 00:26:41.197 00:46:14 -- common/autotest_common.sh@960 -- # wait 138276 00:26:41.456 [2024-04-27 00:46:15.034458] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:42.835 ************************************ 00:26:42.835 END TEST raid5f_superblock_test 00:26:42.835 ************************************ 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@513 -- # return 0 00:26:42.835 00:26:42.835 real 0m22.739s 00:26:42.835 user 0m41.437s 00:26:42.835 sys 0m2.933s 00:26:42.835 00:46:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:42.835 00:46:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:26:42.835 00:46:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:42.835 00:46:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:42.835 00:46:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.835 ************************************ 00:26:42.835 START TEST raid5f_rebuild_test 00:26:42.835 ************************************ 00:26:42.835 00:46:16 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 4 false false 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@544 -- # raid_pid=138963 00:26:42.835 
00:46:16 -- bdev/bdev_raid.sh@545 -- # waitforlisten 138963 /var/tmp/spdk-raid.sock 00:26:42.835 00:46:16 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:42.835 00:46:16 -- common/autotest_common.sh@817 -- # '[' -z 138963 ']' 00:26:42.835 00:46:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:42.835 00:46:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:42.835 00:46:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:42.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:42.835 00:46:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:42.835 00:46:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.835 [2024-04-27 00:46:16.197135] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:26:42.835 [2024-04-27 00:46:16.197318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138963 ] 00:26:42.835 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:42.835 Zero copy mechanism will not be used. 00:26:42.835 [2024-04-27 00:46:16.366777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.094 [2024-04-27 00:46:16.629384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.353 [2024-04-27 00:46:16.831895] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:43.611 00:46:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:43.611 00:46:17 -- common/autotest_common.sh@850 -- # return 0 00:26:43.611 00:46:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:43.611 00:46:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:43.611 00:46:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:43.870 BaseBdev1 00:26:43.870 00:46:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:43.870 00:46:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:43.870 00:46:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:44.129 BaseBdev2 00:26:44.129 00:46:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:44.129 00:46:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:44.129 00:46:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:44.387 BaseBdev3 00:26:44.387 00:46:17 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:26:44.387 00:46:17 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:26:44.387 00:46:17 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:44.647 BaseBdev4 00:26:44.647 00:46:18 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:44.905 spare_malloc 00:26:44.905 00:46:18 -- bdev/bdev_raid.sh@559 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:45.165 spare_delay 00:26:45.165 00:46:18 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:45.424 [2024-04-27 00:46:18.839431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:45.424 [2024-04-27 00:46:18.839534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.424 [2024-04-27 00:46:18.839568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:26:45.424 [2024-04-27 00:46:18.839616] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.424 [2024-04-27 00:46:18.842101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.424 [2024-04-27 00:46:18.842152] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:45.424 spare 00:26:45.424 00:46:18 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:26:45.683 [2024-04-27 00:46:19.047542] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:45.683 [2024-04-27 00:46:19.049673] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:45.683 [2024-04-27 00:46:19.049741] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:45.683 [2024-04-27 00:46:19.049785] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:45.683 [2024-04-27 00:46:19.049870] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:26:45.683 [2024-04-27 00:46:19.049883] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:26:45.683 [2024-04-27 00:46:19.050073] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:45.683 [2024-04-27 00:46:19.056000] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:26:45.683 [2024-04-27 00:46:19.056026] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:26:45.683 [2024-04-27 00:46:19.056234] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:45.683 00:46:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.683 00:46:19 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.942 00:46:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:45.942 "name": "raid_bdev1", 00:26:45.942 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:45.942 "strip_size_kb": 64, 00:26:45.942 "state": "online", 00:26:45.942 "raid_level": "raid5f", 00:26:45.942 "superblock": false, 00:26:45.942 "num_base_bdevs": 4, 00:26:45.942 "num_base_bdevs_discovered": 4, 00:26:45.942 "num_base_bdevs_operational": 4, 00:26:45.942 "base_bdevs_list": [ 00:26:45.942 { 00:26:45.942 "name": "BaseBdev1", 00:26:45.942 "uuid": "00630a1d-4a2a-41f7-bfc0-150f0f39c926", 00:26:45.942 "is_configured": true, 00:26:45.942 "data_offset": 0, 00:26:45.942 "data_size": 65536 00:26:45.942 }, 00:26:45.942 { 00:26:45.942 "name": "BaseBdev2", 00:26:45.942 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:45.942 "is_configured": true, 00:26:45.942 "data_offset": 0, 00:26:45.942 "data_size": 65536 00:26:45.942 }, 00:26:45.942 { 00:26:45.942 "name": "BaseBdev3", 00:26:45.942 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:45.942 "is_configured": true, 00:26:45.942 "data_offset": 0, 00:26:45.942 "data_size": 65536 00:26:45.942 }, 00:26:45.942 { 00:26:45.942 "name": "BaseBdev4", 00:26:45.942 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:45.942 "is_configured": true, 00:26:45.942 "data_offset": 0, 00:26:45.942 "data_size": 65536 00:26:45.942 } 00:26:45.942 ] 00:26:45.942 }' 00:26:45.942 00:46:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:45.942 00:46:19 -- common/autotest_common.sh@10 -- # set +x 00:26:46.510 00:46:19 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:46.510 00:46:19 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:26:46.769 [2024-04-27 00:46:20.139596] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:46.769 00:46:20 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:26:46.769 00:46:20 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:46.769 00:46:20 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.028 00:46:20 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:26:47.028 00:46:20 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:26:47.028 00:46:20 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:26:47.028 00:46:20 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@12 -- # local i 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:47.028 00:46:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:47.028 [2024-04-27 00:46:20.603520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:47.287 /dev/nbd0 00:26:47.287 00:46:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:47.288 00:46:20 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:47.288 00:46:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:26:47.288 00:46:20 -- common/autotest_common.sh@855 -- # local i 00:26:47.288 00:46:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:26:47.288 00:46:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:26:47.288 00:46:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:26:47.288 00:46:20 -- common/autotest_common.sh@859 -- # break 00:26:47.288 00:46:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:26:47.288 00:46:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:26:47.288 00:46:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:47.288 1+0 records in 00:26:47.288 1+0 records out 00:26:47.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366762 s, 11.2 MB/s 00:26:47.288 00:46:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:47.288 00:46:20 -- common/autotest_common.sh@872 -- # size=4096 00:26:47.288 00:46:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:47.288 00:46:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:26:47.288 00:46:20 -- common/autotest_common.sh@875 -- # return 0 00:26:47.288 00:46:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:47.288 00:46:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:47.288 00:46:20 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:26:47.288 00:46:20 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:26:47.288 00:46:20 -- bdev/bdev_raid.sh@582 -- # echo 192 00:26:47.288 00:46:20 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:26:47.855 512+0 records in 00:26:47.855 512+0 records out 00:26:47.855 100663296 bytes (101 MB, 96 MiB) copied, 0.529954 s, 190 MB/s 00:26:47.855 00:46:21 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:47.855 00:46:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:47.855 00:46:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:47.855 00:46:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:47.855 00:46:21 -- bdev/nbd_common.sh@51 -- # local i 00:26:47.855 00:46:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:47.855 00:46:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:48.114 00:46:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:48.114 [2024-04-27 00:46:21.486807] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.114 00:46:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:48.114 00:46:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:48.114 00:46:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:48.114 00:46:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:48.114 00:46:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:48.114 00:46:21 -- bdev/nbd_common.sh@41 -- # break 00:26:48.114 00:46:21 -- bdev/nbd_common.sh@45 -- # return 0 00:26:48.114 00:46:21 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:48.372 [2024-04-27 00:46:21.742869] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:48.373 00:46:21 -- 
bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.373 00:46:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.632 00:46:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:48.632 "name": "raid_bdev1", 00:26:48.632 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:48.632 "strip_size_kb": 64, 00:26:48.632 "state": "online", 00:26:48.632 "raid_level": "raid5f", 00:26:48.632 "superblock": false, 00:26:48.632 "num_base_bdevs": 4, 00:26:48.632 "num_base_bdevs_discovered": 3, 00:26:48.632 "num_base_bdevs_operational": 3, 00:26:48.632 "base_bdevs_list": [ 00:26:48.632 { 00:26:48.632 "name": null, 00:26:48.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.632 "is_configured": false, 00:26:48.632 "data_offset": 0, 00:26:48.632 "data_size": 65536 00:26:48.632 }, 00:26:48.632 { 00:26:48.632 "name": "BaseBdev2", 00:26:48.632 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:48.632 "is_configured": true, 00:26:48.632 "data_offset": 0, 00:26:48.632 "data_size": 65536 00:26:48.632 }, 00:26:48.632 { 00:26:48.632 "name": "BaseBdev3", 00:26:48.632 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:48.632 "is_configured": true, 00:26:48.632 "data_offset": 0, 00:26:48.632 "data_size": 65536 00:26:48.632 }, 00:26:48.632 { 00:26:48.632 "name": "BaseBdev4", 00:26:48.632 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:48.632 "is_configured": true, 00:26:48.632 "data_offset": 0, 00:26:48.632 "data_size": 65536 00:26:48.632 } 00:26:48.632 ] 00:26:48.632 }' 00:26:48.632 00:46:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:48.632 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:26:49.200 00:46:22 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:49.461 [2024-04-27 00:46:22.827140] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:49.461 [2024-04-27 00:46:22.827448] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:49.461 [2024-04-27 00:46:22.838869] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:26:49.461 [2024-04-27 00:46:22.846614] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:49.461 00:46:22 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:26:50.396 00:46:23 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:50.396 00:46:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:50.396 00:46:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:50.396 00:46:23 -- 
bdev/bdev_raid.sh@185 -- # local target=spare 00:26:50.396 00:46:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:50.396 00:46:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.396 00:46:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.655 00:46:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:50.655 "name": "raid_bdev1", 00:26:50.655 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:50.655 "strip_size_kb": 64, 00:26:50.655 "state": "online", 00:26:50.655 "raid_level": "raid5f", 00:26:50.655 "superblock": false, 00:26:50.655 "num_base_bdevs": 4, 00:26:50.655 "num_base_bdevs_discovered": 4, 00:26:50.655 "num_base_bdevs_operational": 4, 00:26:50.655 "process": { 00:26:50.655 "type": "rebuild", 00:26:50.655 "target": "spare", 00:26:50.655 "progress": { 00:26:50.655 "blocks": 23040, 00:26:50.655 "percent": 11 00:26:50.655 } 00:26:50.655 }, 00:26:50.655 "base_bdevs_list": [ 00:26:50.655 { 00:26:50.655 "name": "spare", 00:26:50.655 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:26:50.655 "is_configured": true, 00:26:50.655 "data_offset": 0, 00:26:50.655 "data_size": 65536 00:26:50.655 }, 00:26:50.655 { 00:26:50.655 "name": "BaseBdev2", 00:26:50.655 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:50.655 "is_configured": true, 00:26:50.655 "data_offset": 0, 00:26:50.655 "data_size": 65536 00:26:50.655 }, 00:26:50.655 { 00:26:50.655 "name": "BaseBdev3", 00:26:50.655 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:50.655 "is_configured": true, 00:26:50.655 "data_offset": 0, 00:26:50.655 "data_size": 65536 00:26:50.655 }, 00:26:50.655 { 00:26:50.655 "name": "BaseBdev4", 00:26:50.655 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:50.655 "is_configured": true, 00:26:50.655 "data_offset": 0, 00:26:50.655 "data_size": 65536 00:26:50.655 } 00:26:50.655 ] 00:26:50.655 }' 00:26:50.655 00:46:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:50.655 00:46:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:50.655 00:46:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:50.655 00:46:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:50.655 00:46:24 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:50.913 [2024-04-27 00:46:24.415931] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:50.913 [2024-04-27 00:46:24.458555] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:50.914 [2024-04-27 00:46:24.458883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
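(A note on the verify_raid_bdev_state locals being declared in this stretch of the trace: the helper's checks reduce to a single rpc.py call filtered through jq. Below is a minimal stand-alone sketch assuming only the rpc.py path, socket, and arguments already visible at @594 above; the assertions are a reconstruction from those arguments, not a copy of the helper's body in bdev_raid.sh.)

  # Stand-alone version of the state probe driven by verify_raid_bdev_state raid_bdev1 online raid5f 64 3
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  tmp=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$tmp") == online ]]                   # expected_state
  [[ $(jq -r '.raid_level' <<< "$tmp") == raid5f ]]              # raid_level
  [[ $(jq -r '.strip_size_kb' <<< "$tmp") -eq 64 ]]              # strip_size
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") -eq 3 ]]  # degraded: 3 of 4 after the hot-remove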
00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:51.173 "name": "raid_bdev1", 00:26:51.173 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:51.173 "strip_size_kb": 64, 00:26:51.173 "state": "online", 00:26:51.173 "raid_level": "raid5f", 00:26:51.173 "superblock": false, 00:26:51.173 "num_base_bdevs": 4, 00:26:51.173 "num_base_bdevs_discovered": 3, 00:26:51.173 "num_base_bdevs_operational": 3, 00:26:51.173 "base_bdevs_list": [ 00:26:51.173 { 00:26:51.173 "name": null, 00:26:51.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.173 "is_configured": false, 00:26:51.173 "data_offset": 0, 00:26:51.173 "data_size": 65536 00:26:51.173 }, 00:26:51.173 { 00:26:51.173 "name": "BaseBdev2", 00:26:51.173 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:51.173 "is_configured": true, 00:26:51.173 "data_offset": 0, 00:26:51.173 "data_size": 65536 00:26:51.173 }, 00:26:51.173 { 00:26:51.173 "name": "BaseBdev3", 00:26:51.173 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:51.173 "is_configured": true, 00:26:51.173 "data_offset": 0, 00:26:51.173 "data_size": 65536 00:26:51.173 }, 00:26:51.173 { 00:26:51.173 "name": "BaseBdev4", 00:26:51.173 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:51.173 "is_configured": true, 00:26:51.173 "data_offset": 0, 00:26:51.173 "data_size": 65536 00:26:51.173 } 00:26:51.173 ] 00:26:51.173 }' 00:26:51.173 00:46:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:51.173 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:26:51.739 00:46:25 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:51.739 00:46:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:51.739 00:46:25 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:51.739 00:46:25 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:51.739 00:46:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:51.739 00:46:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.740 00:46:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.307 00:46:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:52.307 "name": "raid_bdev1", 00:26:52.307 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:52.307 "strip_size_kb": 64, 00:26:52.307 "state": "online", 00:26:52.307 "raid_level": "raid5f", 00:26:52.307 "superblock": false, 00:26:52.307 "num_base_bdevs": 4, 00:26:52.307 "num_base_bdevs_discovered": 3, 00:26:52.307 "num_base_bdevs_operational": 3, 00:26:52.307 "base_bdevs_list": [ 00:26:52.307 { 00:26:52.307 "name": null, 00:26:52.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.307 "is_configured": false, 00:26:52.307 "data_offset": 0, 00:26:52.307 "data_size": 65536 00:26:52.307 }, 00:26:52.307 { 00:26:52.307 "name": "BaseBdev2", 00:26:52.307 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:52.307 "is_configured": true, 00:26:52.307 "data_offset": 0, 00:26:52.307 "data_size": 65536 00:26:52.307 }, 00:26:52.307 { 00:26:52.307 "name": "BaseBdev3", 00:26:52.307 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:52.307 "is_configured": true, 00:26:52.307 "data_offset": 0, 
00:26:52.307 "data_size": 65536 00:26:52.307 }, 00:26:52.307 { 00:26:52.307 "name": "BaseBdev4", 00:26:52.307 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:52.307 "is_configured": true, 00:26:52.307 "data_offset": 0, 00:26:52.307 "data_size": 65536 00:26:52.307 } 00:26:52.307 ] 00:26:52.307 }' 00:26:52.307 00:46:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:52.307 00:46:25 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:52.307 00:46:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:52.307 00:46:25 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:52.307 00:46:25 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:52.567 [2024-04-27 00:46:25.908509] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:52.567 [2024-04-27 00:46:25.908805] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:52.567 [2024-04-27 00:46:25.921058] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:26:52.567 [2024-04-27 00:46:25.929393] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:52.567 00:46:25 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:53.501 00:46:26 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.501 00:46:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:53.501 00:46:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:53.501 00:46:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:53.501 00:46:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:53.501 00:46:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.501 00:46:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.758 00:46:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:53.758 "name": "raid_bdev1", 00:26:53.758 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:53.758 "strip_size_kb": 64, 00:26:53.758 "state": "online", 00:26:53.758 "raid_level": "raid5f", 00:26:53.758 "superblock": false, 00:26:53.758 "num_base_bdevs": 4, 00:26:53.758 "num_base_bdevs_discovered": 4, 00:26:53.758 "num_base_bdevs_operational": 4, 00:26:53.758 "process": { 00:26:53.758 "type": "rebuild", 00:26:53.758 "target": "spare", 00:26:53.758 "progress": { 00:26:53.758 "blocks": 23040, 00:26:53.758 "percent": 11 00:26:53.758 } 00:26:53.758 }, 00:26:53.758 "base_bdevs_list": [ 00:26:53.758 { 00:26:53.758 "name": "spare", 00:26:53.758 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:26:53.758 "is_configured": true, 00:26:53.758 "data_offset": 0, 00:26:53.758 "data_size": 65536 00:26:53.758 }, 00:26:53.758 { 00:26:53.758 "name": "BaseBdev2", 00:26:53.758 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:53.758 "is_configured": true, 00:26:53.758 "data_offset": 0, 00:26:53.758 "data_size": 65536 00:26:53.758 }, 00:26:53.758 { 00:26:53.758 "name": "BaseBdev3", 00:26:53.758 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:53.758 "is_configured": true, 00:26:53.758 "data_offset": 0, 00:26:53.758 "data_size": 65536 00:26:53.758 }, 00:26:53.758 { 00:26:53.758 "name": "BaseBdev4", 00:26:53.758 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:53.758 "is_configured": true, 00:26:53.758 "data_offset": 0, 00:26:53.759 "data_size": 65536 
00:26:53.759 } 00:26:53.759 ] 00:26:53.759 }' 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@657 -- # local timeout=737 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.759 00:46:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.017 00:46:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:54.017 "name": "raid_bdev1", 00:26:54.017 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:54.017 "strip_size_kb": 64, 00:26:54.017 "state": "online", 00:26:54.017 "raid_level": "raid5f", 00:26:54.017 "superblock": false, 00:26:54.017 "num_base_bdevs": 4, 00:26:54.017 "num_base_bdevs_discovered": 4, 00:26:54.017 "num_base_bdevs_operational": 4, 00:26:54.017 "process": { 00:26:54.017 "type": "rebuild", 00:26:54.017 "target": "spare", 00:26:54.017 "progress": { 00:26:54.017 "blocks": 30720, 00:26:54.017 "percent": 15 00:26:54.017 } 00:26:54.017 }, 00:26:54.017 "base_bdevs_list": [ 00:26:54.017 { 00:26:54.017 "name": "spare", 00:26:54.017 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:26:54.017 "is_configured": true, 00:26:54.017 "data_offset": 0, 00:26:54.017 "data_size": 65536 00:26:54.017 }, 00:26:54.017 { 00:26:54.017 "name": "BaseBdev2", 00:26:54.017 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:54.017 "is_configured": true, 00:26:54.017 "data_offset": 0, 00:26:54.017 "data_size": 65536 00:26:54.017 }, 00:26:54.017 { 00:26:54.017 "name": "BaseBdev3", 00:26:54.017 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:54.017 "is_configured": true, 00:26:54.017 "data_offset": 0, 00:26:54.017 "data_size": 65536 00:26:54.018 }, 00:26:54.018 { 00:26:54.018 "name": "BaseBdev4", 00:26:54.018 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:54.018 "is_configured": true, 00:26:54.018 "data_offset": 0, 00:26:54.018 "data_size": 65536 00:26:54.018 } 00:26:54.018 ] 00:26:54.018 }' 00:26:54.018 00:46:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:54.276 00:46:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.276 00:46:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:54.276 00:46:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.276 00:46:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:55.210 00:46:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:55.210 00:46:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:26:55.210 00:46:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:55.210 00:46:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:55.210 00:46:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:55.210 00:46:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:55.210 00:46:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.210 00:46:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.469 00:46:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:55.469 "name": "raid_bdev1", 00:26:55.469 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:55.469 "strip_size_kb": 64, 00:26:55.469 "state": "online", 00:26:55.469 "raid_level": "raid5f", 00:26:55.469 "superblock": false, 00:26:55.469 "num_base_bdevs": 4, 00:26:55.469 "num_base_bdevs_discovered": 4, 00:26:55.469 "num_base_bdevs_operational": 4, 00:26:55.469 "process": { 00:26:55.469 "type": "rebuild", 00:26:55.469 "target": "spare", 00:26:55.469 "progress": { 00:26:55.469 "blocks": 55680, 00:26:55.469 "percent": 28 00:26:55.469 } 00:26:55.469 }, 00:26:55.469 "base_bdevs_list": [ 00:26:55.469 { 00:26:55.469 "name": "spare", 00:26:55.469 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:26:55.469 "is_configured": true, 00:26:55.469 "data_offset": 0, 00:26:55.469 "data_size": 65536 00:26:55.469 }, 00:26:55.469 { 00:26:55.469 "name": "BaseBdev2", 00:26:55.469 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:55.469 "is_configured": true, 00:26:55.469 "data_offset": 0, 00:26:55.469 "data_size": 65536 00:26:55.469 }, 00:26:55.469 { 00:26:55.469 "name": "BaseBdev3", 00:26:55.469 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:55.469 "is_configured": true, 00:26:55.469 "data_offset": 0, 00:26:55.469 "data_size": 65536 00:26:55.469 }, 00:26:55.469 { 00:26:55.469 "name": "BaseBdev4", 00:26:55.469 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:55.469 "is_configured": true, 00:26:55.469 "data_offset": 0, 00:26:55.469 "data_size": 65536 00:26:55.469 } 00:26:55.469 ] 00:26:55.469 }' 00:26:55.469 00:46:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:55.469 00:46:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:55.469 00:46:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:55.469 00:46:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:55.469 00:46:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.865 00:46:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:56.865 "name": "raid_bdev1", 00:26:56.865 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:56.865 "strip_size_kb": 64, 00:26:56.865 "state": "online", 00:26:56.866 "raid_level": 
"raid5f", 00:26:56.866 "superblock": false, 00:26:56.866 "num_base_bdevs": 4, 00:26:56.866 "num_base_bdevs_discovered": 4, 00:26:56.866 "num_base_bdevs_operational": 4, 00:26:56.866 "process": { 00:26:56.866 "type": "rebuild", 00:26:56.866 "target": "spare", 00:26:56.866 "progress": { 00:26:56.866 "blocks": 82560, 00:26:56.866 "percent": 41 00:26:56.866 } 00:26:56.866 }, 00:26:56.866 "base_bdevs_list": [ 00:26:56.866 { 00:26:56.866 "name": "spare", 00:26:56.866 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:26:56.866 "is_configured": true, 00:26:56.866 "data_offset": 0, 00:26:56.866 "data_size": 65536 00:26:56.866 }, 00:26:56.866 { 00:26:56.866 "name": "BaseBdev2", 00:26:56.866 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:56.866 "is_configured": true, 00:26:56.866 "data_offset": 0, 00:26:56.866 "data_size": 65536 00:26:56.866 }, 00:26:56.866 { 00:26:56.866 "name": "BaseBdev3", 00:26:56.866 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:56.866 "is_configured": true, 00:26:56.866 "data_offset": 0, 00:26:56.866 "data_size": 65536 00:26:56.866 }, 00:26:56.866 { 00:26:56.866 "name": "BaseBdev4", 00:26:56.866 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:56.866 "is_configured": true, 00:26:56.866 "data_offset": 0, 00:26:56.866 "data_size": 65536 00:26:56.866 } 00:26:56.866 ] 00:26:56.866 }' 00:26:56.866 00:46:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:56.866 00:46:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:56.866 00:46:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:56.866 00:46:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:56.866 00:46:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:58.252 "name": "raid_bdev1", 00:26:58.252 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:58.252 "strip_size_kb": 64, 00:26:58.252 "state": "online", 00:26:58.252 "raid_level": "raid5f", 00:26:58.252 "superblock": false, 00:26:58.252 "num_base_bdevs": 4, 00:26:58.252 "num_base_bdevs_discovered": 4, 00:26:58.252 "num_base_bdevs_operational": 4, 00:26:58.252 "process": { 00:26:58.252 "type": "rebuild", 00:26:58.252 "target": "spare", 00:26:58.252 "progress": { 00:26:58.252 "blocks": 107520, 00:26:58.252 "percent": 54 00:26:58.252 } 00:26:58.252 }, 00:26:58.252 "base_bdevs_list": [ 00:26:58.252 { 00:26:58.252 "name": "spare", 00:26:58.252 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:26:58.252 "is_configured": true, 00:26:58.252 "data_offset": 0, 00:26:58.252 "data_size": 65536 00:26:58.252 }, 00:26:58.252 { 00:26:58.252 "name": "BaseBdev2", 00:26:58.252 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:58.252 "is_configured": true, 00:26:58.252 "data_offset": 0, 00:26:58.252 "data_size": 65536 
00:26:58.252 }, 00:26:58.252 { 00:26:58.252 "name": "BaseBdev3", 00:26:58.252 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:58.252 "is_configured": true, 00:26:58.252 "data_offset": 0, 00:26:58.252 "data_size": 65536 00:26:58.252 }, 00:26:58.252 { 00:26:58.252 "name": "BaseBdev4", 00:26:58.252 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:58.252 "is_configured": true, 00:26:58.252 "data_offset": 0, 00:26:58.252 "data_size": 65536 00:26:58.252 } 00:26:58.252 ] 00:26:58.252 }' 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.252 00:46:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:59.188 00:46:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:59.188 00:46:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:59.188 00:46:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:59.188 00:46:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:59.188 00:46:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:59.188 00:46:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:59.188 00:46:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.188 00:46:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.447 00:46:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:59.447 "name": "raid_bdev1", 00:26:59.447 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:26:59.447 "strip_size_kb": 64, 00:26:59.447 "state": "online", 00:26:59.447 "raid_level": "raid5f", 00:26:59.447 "superblock": false, 00:26:59.447 "num_base_bdevs": 4, 00:26:59.447 "num_base_bdevs_discovered": 4, 00:26:59.447 "num_base_bdevs_operational": 4, 00:26:59.447 "process": { 00:26:59.447 "type": "rebuild", 00:26:59.447 "target": "spare", 00:26:59.447 "progress": { 00:26:59.447 "blocks": 134400, 00:26:59.447 "percent": 68 00:26:59.447 } 00:26:59.447 }, 00:26:59.447 "base_bdevs_list": [ 00:26:59.447 { 00:26:59.447 "name": "spare", 00:26:59.447 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:26:59.447 "is_configured": true, 00:26:59.447 "data_offset": 0, 00:26:59.447 "data_size": 65536 00:26:59.447 }, 00:26:59.447 { 00:26:59.447 "name": "BaseBdev2", 00:26:59.447 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:26:59.447 "is_configured": true, 00:26:59.447 "data_offset": 0, 00:26:59.447 "data_size": 65536 00:26:59.447 }, 00:26:59.447 { 00:26:59.447 "name": "BaseBdev3", 00:26:59.447 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:26:59.447 "is_configured": true, 00:26:59.447 "data_offset": 0, 00:26:59.447 "data_size": 65536 00:26:59.447 }, 00:26:59.447 { 00:26:59.447 "name": "BaseBdev4", 00:26:59.447 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:26:59.447 "is_configured": true, 00:26:59.447 "data_offset": 0, 00:26:59.447 "data_size": 65536 00:26:59.447 } 00:26:59.447 ] 00:26:59.447 }' 00:26:59.447 00:46:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:59.710 00:46:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:59.710 00:46:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:59.710 00:46:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:59.710 
00:46:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:00.644 00:46:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:00.644 00:46:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:00.644 00:46:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:00.644 00:46:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:00.644 00:46:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:00.644 00:46:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:00.644 00:46:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.644 00:46:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.903 00:46:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:00.903 "name": "raid_bdev1", 00:27:00.903 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:27:00.903 "strip_size_kb": 64, 00:27:00.903 "state": "online", 00:27:00.903 "raid_level": "raid5f", 00:27:00.903 "superblock": false, 00:27:00.903 "num_base_bdevs": 4, 00:27:00.903 "num_base_bdevs_discovered": 4, 00:27:00.903 "num_base_bdevs_operational": 4, 00:27:00.903 "process": { 00:27:00.903 "type": "rebuild", 00:27:00.903 "target": "spare", 00:27:00.903 "progress": { 00:27:00.903 "blocks": 159360, 00:27:00.903 "percent": 81 00:27:00.903 } 00:27:00.903 }, 00:27:00.903 "base_bdevs_list": [ 00:27:00.903 { 00:27:00.903 "name": "spare", 00:27:00.903 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:27:00.903 "is_configured": true, 00:27:00.903 "data_offset": 0, 00:27:00.903 "data_size": 65536 00:27:00.903 }, 00:27:00.903 { 00:27:00.903 "name": "BaseBdev2", 00:27:00.903 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:27:00.903 "is_configured": true, 00:27:00.903 "data_offset": 0, 00:27:00.903 "data_size": 65536 00:27:00.903 }, 00:27:00.903 { 00:27:00.903 "name": "BaseBdev3", 00:27:00.903 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:27:00.903 "is_configured": true, 00:27:00.903 "data_offset": 0, 00:27:00.903 "data_size": 65536 00:27:00.903 }, 00:27:00.903 { 00:27:00.903 "name": "BaseBdev4", 00:27:00.903 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:27:00.903 "is_configured": true, 00:27:00.903 "data_offset": 0, 00:27:00.903 "data_size": 65536 00:27:00.903 } 00:27:00.903 ] 00:27:00.903 }' 00:27:00.903 00:46:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:00.903 00:46:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:00.903 00:46:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:00.903 00:46:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:00.903 00:46:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@188 -- # 
raid_bdev_info='{ 00:27:02.279 "name": "raid_bdev1", 00:27:02.279 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:27:02.279 "strip_size_kb": 64, 00:27:02.279 "state": "online", 00:27:02.279 "raid_level": "raid5f", 00:27:02.279 "superblock": false, 00:27:02.279 "num_base_bdevs": 4, 00:27:02.279 "num_base_bdevs_discovered": 4, 00:27:02.279 "num_base_bdevs_operational": 4, 00:27:02.279 "process": { 00:27:02.279 "type": "rebuild", 00:27:02.279 "target": "spare", 00:27:02.279 "progress": { 00:27:02.279 "blocks": 184320, 00:27:02.279 "percent": 93 00:27:02.279 } 00:27:02.279 }, 00:27:02.279 "base_bdevs_list": [ 00:27:02.279 { 00:27:02.279 "name": "spare", 00:27:02.279 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:27:02.279 "is_configured": true, 00:27:02.279 "data_offset": 0, 00:27:02.279 "data_size": 65536 00:27:02.279 }, 00:27:02.279 { 00:27:02.279 "name": "BaseBdev2", 00:27:02.279 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:27:02.279 "is_configured": true, 00:27:02.279 "data_offset": 0, 00:27:02.279 "data_size": 65536 00:27:02.279 }, 00:27:02.279 { 00:27:02.279 "name": "BaseBdev3", 00:27:02.279 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:27:02.279 "is_configured": true, 00:27:02.279 "data_offset": 0, 00:27:02.279 "data_size": 65536 00:27:02.279 }, 00:27:02.279 { 00:27:02.279 "name": "BaseBdev4", 00:27:02.279 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:27:02.279 "is_configured": true, 00:27:02.279 "data_offset": 0, 00:27:02.279 "data_size": 65536 00:27:02.279 } 00:27:02.279 ] 00:27:02.279 }' 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:02.279 00:46:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:02.846 [2024-04-27 00:46:36.304748] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:02.846 [2024-04-27 00:46:36.305178] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:02.846 [2024-04-27 00:46:36.305443] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:03.413 "name": "raid_bdev1", 00:27:03.413 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:27:03.413 "strip_size_kb": 64, 00:27:03.413 "state": "online", 00:27:03.413 "raid_level": "raid5f", 00:27:03.413 "superblock": false, 00:27:03.413 "num_base_bdevs": 4, 00:27:03.413 "num_base_bdevs_discovered": 4, 00:27:03.413 "num_base_bdevs_operational": 4, 00:27:03.413 "base_bdevs_list": [ 00:27:03.413 { 00:27:03.413 "name": "spare", 
00:27:03.413 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:27:03.413 "is_configured": true, 00:27:03.413 "data_offset": 0, 00:27:03.413 "data_size": 65536 00:27:03.413 }, 00:27:03.413 { 00:27:03.413 "name": "BaseBdev2", 00:27:03.413 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:27:03.413 "is_configured": true, 00:27:03.413 "data_offset": 0, 00:27:03.413 "data_size": 65536 00:27:03.413 }, 00:27:03.413 { 00:27:03.413 "name": "BaseBdev3", 00:27:03.413 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:27:03.413 "is_configured": true, 00:27:03.413 "data_offset": 0, 00:27:03.413 "data_size": 65536 00:27:03.413 }, 00:27:03.413 { 00:27:03.413 "name": "BaseBdev4", 00:27:03.413 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:27:03.413 "is_configured": true, 00:27:03.413 "data_offset": 0, 00:27:03.413 "data_size": 65536 00:27:03.413 } 00:27:03.413 ] 00:27:03.413 }' 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:03.413 00:46:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@660 -- # break 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.672 00:46:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:03.931 "name": "raid_bdev1", 00:27:03.931 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:27:03.931 "strip_size_kb": 64, 00:27:03.931 "state": "online", 00:27:03.931 "raid_level": "raid5f", 00:27:03.931 "superblock": false, 00:27:03.931 "num_base_bdevs": 4, 00:27:03.931 "num_base_bdevs_discovered": 4, 00:27:03.931 "num_base_bdevs_operational": 4, 00:27:03.931 "base_bdevs_list": [ 00:27:03.931 { 00:27:03.931 "name": "spare", 00:27:03.931 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:27:03.931 "is_configured": true, 00:27:03.931 "data_offset": 0, 00:27:03.931 "data_size": 65536 00:27:03.931 }, 00:27:03.931 { 00:27:03.931 "name": "BaseBdev2", 00:27:03.931 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:27:03.931 "is_configured": true, 00:27:03.931 "data_offset": 0, 00:27:03.931 "data_size": 65536 00:27:03.931 }, 00:27:03.931 { 00:27:03.931 "name": "BaseBdev3", 00:27:03.931 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:27:03.931 "is_configured": true, 00:27:03.931 "data_offset": 0, 00:27:03.931 "data_size": 65536 00:27:03.931 }, 00:27:03.931 { 00:27:03.931 "name": "BaseBdev4", 00:27:03.931 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:27:03.931 "is_configured": true, 00:27:03.931 "data_offset": 0, 00:27:03.931 "data_size": 65536 00:27:03.931 } 00:27:03.931 ] 00:27:03.931 }' 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.931 00:46:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.190 00:46:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:04.190 "name": "raid_bdev1", 00:27:04.190 "uuid": "c2196132-d12c-4c0f-9d38-5083c0e1d3c4", 00:27:04.190 "strip_size_kb": 64, 00:27:04.190 "state": "online", 00:27:04.190 "raid_level": "raid5f", 00:27:04.190 "superblock": false, 00:27:04.190 "num_base_bdevs": 4, 00:27:04.190 "num_base_bdevs_discovered": 4, 00:27:04.190 "num_base_bdevs_operational": 4, 00:27:04.190 "base_bdevs_list": [ 00:27:04.190 { 00:27:04.190 "name": "spare", 00:27:04.190 "uuid": "d7446c63-5de4-5041-8841-d9ebcc8a6069", 00:27:04.190 "is_configured": true, 00:27:04.190 "data_offset": 0, 00:27:04.190 "data_size": 65536 00:27:04.190 }, 00:27:04.190 { 00:27:04.190 "name": "BaseBdev2", 00:27:04.190 "uuid": "59c862f3-f864-43b9-9a9b-bf4bc8adcccb", 00:27:04.190 "is_configured": true, 00:27:04.190 "data_offset": 0, 00:27:04.190 "data_size": 65536 00:27:04.190 }, 00:27:04.190 { 00:27:04.190 "name": "BaseBdev3", 00:27:04.190 "uuid": "b4baad40-4d7e-412b-b09d-9e79f06f3c9d", 00:27:04.190 "is_configured": true, 00:27:04.190 "data_offset": 0, 00:27:04.190 "data_size": 65536 00:27:04.191 }, 00:27:04.191 { 00:27:04.191 "name": "BaseBdev4", 00:27:04.191 "uuid": "5c60ffe1-e60f-4e0b-91db-6de43c9ca886", 00:27:04.191 "is_configured": true, 00:27:04.191 "data_offset": 0, 00:27:04.191 "data_size": 65536 00:27:04.191 } 00:27:04.191 ] 00:27:04.191 }' 00:27:04.191 00:46:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:04.191 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:27:04.758 00:46:38 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:05.017 [2024-04-27 00:46:38.385452] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:05.017 [2024-04-27 00:46:38.385785] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:05.017 [2024-04-27 00:46:38.386032] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:05.017 [2024-04-27 00:46:38.386235] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:05.017 [2024-04-27 00:46:38.386376] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:27:05.017 00:46:38 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:05.017 00:46:38 -- 
bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.276 00:46:38 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:05.276 00:46:38 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:05.276 00:46:38 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@12 -- # local i 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:05.277 00:46:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:05.535 /dev/nbd0 00:27:05.535 00:46:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:05.535 00:46:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:05.535 00:46:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:05.535 00:46:38 -- common/autotest_common.sh@855 -- # local i 00:27:05.535 00:46:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:05.535 00:46:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:05.535 00:46:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:05.535 00:46:38 -- common/autotest_common.sh@859 -- # break 00:27:05.535 00:46:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:05.535 00:46:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:05.536 00:46:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:05.536 1+0 records in 00:27:05.536 1+0 records out 00:27:05.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469459 s, 8.7 MB/s 00:27:05.536 00:46:38 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.536 00:46:38 -- common/autotest_common.sh@872 -- # size=4096 00:27:05.536 00:46:38 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.536 00:46:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:05.536 00:46:38 -- common/autotest_common.sh@875 -- # return 0 00:27:05.536 00:46:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:05.536 00:46:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:05.536 00:46:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:05.795 /dev/nbd1 00:27:05.795 00:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:05.795 00:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:05.795 00:46:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:27:05.795 00:46:39 -- common/autotest_common.sh@855 -- # local i 00:27:05.795 00:46:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:05.795 00:46:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:05.795 00:46:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:27:05.795 00:46:39 -- common/autotest_common.sh@859 -- # break 00:27:05.795 00:46:39 -- 
common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:05.795 00:46:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:05.795 00:46:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:05.795 1+0 records in 00:27:05.795 1+0 records out 00:27:05.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656409 s, 6.2 MB/s 00:27:05.795 00:46:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.795 00:46:39 -- common/autotest_common.sh@872 -- # size=4096 00:27:05.795 00:46:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:05.795 00:46:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:05.795 00:46:39 -- common/autotest_common.sh@875 -- # return 0 00:27:05.795 00:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:05.795 00:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:05.795 00:46:39 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:06.054 00:46:39 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:06.054 00:46:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:06.054 00:46:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:06.054 00:46:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:06.054 00:46:39 -- bdev/nbd_common.sh@51 -- # local i 00:27:06.054 00:46:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:06.054 00:46:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@41 -- # break 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@45 -- # return 0 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:06.313 00:46:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:06.572 00:46:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:06.572 00:46:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:06.572 00:46:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:06.572 00:46:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:06.572 00:46:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:06.572 00:46:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:06.572 00:46:40 -- bdev/nbd_common.sh@41 -- # break 00:27:06.572 00:46:40 -- bdev/nbd_common.sh@45 -- # return 0 00:27:06.572 00:46:40 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:27:06.572 00:46:40 -- bdev/bdev_raid.sh@709 -- # killprocess 138963 00:27:06.572 00:46:40 -- common/autotest_common.sh@936 -- # '[' -z 138963 ']' 00:27:06.572 00:46:40 -- common/autotest_common.sh@940 -- # kill -0 138963 00:27:06.572 00:46:40 -- common/autotest_common.sh@941 -- # uname 00:27:06.572 00:46:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:06.572 00:46:40 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 138963 00:27:06.572 00:46:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:06.572 00:46:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:06.572 00:46:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 138963' 00:27:06.572 killing process with pid 138963 00:27:06.572 00:46:40 -- common/autotest_common.sh@955 -- # kill 138963 00:27:06.572 Received shutdown signal, test time was about 60.000000 seconds 00:27:06.572 00:27:06.572 Latency(us) 00:27:06.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.572 =================================================================================================================== 00:27:06.572 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:06.572 [2024-04-27 00:46:40.047832] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:06.572 00:46:40 -- common/autotest_common.sh@960 -- # wait 138963 00:27:06.830 [2024-04-27 00:46:40.408302] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:08.205 ************************************ 00:27:08.205 END TEST raid5f_rebuild_test 00:27:08.205 ************************************ 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:08.205 00:27:08.205 real 0m25.377s 00:27:08.205 user 0m36.872s 00:27:08.205 sys 0m2.888s 00:27:08.205 00:46:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:08.205 00:46:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:27:08.205 00:46:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:27:08.205 00:46:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:08.205 00:46:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.205 ************************************ 00:27:08.205 START TEST raid5f_rebuild_test_sb 00:27:08.205 ************************************ 00:27:08.205 00:46:41 -- common/autotest_common.sh@1111 -- # raid_rebuild_test raid5f 4 true false 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@521 -- # 
local base_bdevs 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=139584 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139584 /var/tmp/spdk-raid.sock 00:27:08.205 00:46:41 -- common/autotest_common.sh@817 -- # '[' -z 139584 ']' 00:27:08.205 00:46:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:08.205 00:46:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:08.205 00:46:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:08.205 00:46:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:08.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:08.206 00:46:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:08.206 00:46:41 -- common/autotest_common.sh@10 -- # set +x 00:27:08.206 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:08.206 Zero copy mechanism will not be used. 00:27:08.206 [2024-04-27 00:46:41.657912] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
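(The functional difference in this _sb variant is the extra -s appended to create_arg at @540, which puts an on-disk superblock on every base bdev. The block accounting that surfaces later in this trace, data_offset 2048, data_size 63488, and raid blockcnt 190464, follows from arithmetic like this sketch; the 2048-block reservation is read off the reported data_offset, not taken from SPDK source.)

  # Superblock accounting for the _sb run
  base_blocks=$(( 32 * 1024 * 1024 / 512 ))   # bdev_malloc_create 32 512 -> 65536 blocks per base bdev
  data_size=$(( base_blocks - 2048 ))         # superblock reserves the first 2048 blocks -> 63488
  echo $(( (4 - 1) * data_size ))             # raid5f, 3 data + 1 parity -> blockcnt 190464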
00:27:08.206 [2024-04-27 00:46:41.658063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139584 ] 00:27:08.465 [2024-04-27 00:46:41.814104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.465 [2024-04-27 00:46:42.026400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.724 [2024-04-27 00:46:42.222250] bdev_raid.c:1422:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:09.291 00:46:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:09.291 00:46:42 -- common/autotest_common.sh@850 -- # return 0 00:27:09.291 00:46:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:09.291 00:46:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:09.291 00:46:42 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:09.558 BaseBdev1_malloc 00:27:09.558 00:46:42 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:09.822 [2024-04-27 00:46:43.159949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:09.822 [2024-04-27 00:46:43.160053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.822 [2024-04-27 00:46:43.160088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:27:09.822 [2024-04-27 00:46:43.160143] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.822 [2024-04-27 00:46:43.162590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.822 [2024-04-27 00:46:43.162638] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:09.822 BaseBdev1 00:27:09.822 00:46:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:09.822 00:46:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:09.822 00:46:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:10.080 BaseBdev2_malloc 00:27:10.080 00:46:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:10.080 [2024-04-27 00:46:43.620689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:10.080 [2024-04-27 00:46:43.620803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.080 [2024-04-27 00:46:43.620847] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:10.080 [2024-04-27 00:46:43.620901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.080 [2024-04-27 00:46:43.623245] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.080 [2024-04-27 00:46:43.623294] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:10.080 BaseBdev2 00:27:10.080 00:46:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:10.080 00:46:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:10.080 00:46:43 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:10.339 BaseBdev3_malloc 00:27:10.339 00:46:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:10.598 [2024-04-27 00:46:44.088164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:10.598 [2024-04-27 00:46:44.088259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.598 [2024-04-27 00:46:44.088308] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:10.598 [2024-04-27 00:46:44.088361] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.598 [2024-04-27 00:46:44.090678] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.598 [2024-04-27 00:46:44.090730] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:10.598 BaseBdev3 00:27:10.598 00:46:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:27:10.598 00:46:44 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:27:10.598 00:46:44 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:10.857 BaseBdev4_malloc 00:27:10.857 00:46:44 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:11.115 [2024-04-27 00:46:44.533599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:11.115 [2024-04-27 00:46:44.533750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.115 [2024-04-27 00:46:44.533826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:11.115 [2024-04-27 00:46:44.533953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.116 [2024-04-27 00:46:44.537219] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.116 [2024-04-27 00:46:44.537307] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:11.116 BaseBdev4 00:27:11.116 00:46:44 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:11.374 spare_malloc 00:27:11.374 00:46:44 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:11.633 spare_delay 00:27:11.633 00:46:45 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:11.633 [2024-04-27 00:46:45.211844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:11.633 [2024-04-27 00:46:45.211971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.633 [2024-04-27 00:46:45.212047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:11.633 [2024-04-27 00:46:45.212163] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.633 [2024-04-27 00:46:45.214562] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:27:11.633 [2024-04-27 00:46:45.214653] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:11.633 spare 00:27:11.892 00:46:45 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:27:11.892 [2024-04-27 00:46:45.427981] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:11.892 [2024-04-27 00:46:45.430289] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:11.892 [2024-04-27 00:46:45.430430] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:11.892 [2024-04-27 00:46:45.430580] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:11.892 [2024-04-27 00:46:45.430959] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000010e00 00:27:11.892 [2024-04-27 00:46:45.430988] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:11.892 [2024-04-27 00:46:45.431249] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:11.892 [2024-04-27 00:46:45.437623] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000010e00 00:27:11.892 [2024-04-27 00:46:45.437652] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000010e00 00:27:11.892 [2024-04-27 00:46:45.437966] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.892 00:46:45 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:11.892 00:46:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:11.892 00:46:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:11.892 00:46:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:11.892 00:46:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:11.893 00:46:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:11.893 00:46:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:11.893 00:46:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:11.893 00:46:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:11.893 00:46:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:11.893 00:46:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.893 00:46:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.152 00:46:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:12.152 "name": "raid_bdev1", 00:27:12.152 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:12.152 "strip_size_kb": 64, 00:27:12.152 "state": "online", 00:27:12.152 "raid_level": "raid5f", 00:27:12.152 "superblock": true, 00:27:12.152 "num_base_bdevs": 4, 00:27:12.152 "num_base_bdevs_discovered": 4, 00:27:12.152 "num_base_bdevs_operational": 4, 00:27:12.152 "base_bdevs_list": [ 00:27:12.152 { 00:27:12.152 "name": "BaseBdev1", 00:27:12.152 "uuid": "83843115-0246-547a-acad-aa36e3848eb6", 00:27:12.152 "is_configured": true, 00:27:12.152 "data_offset": 2048, 00:27:12.152 "data_size": 63488 00:27:12.152 }, 00:27:12.152 { 00:27:12.152 "name": "BaseBdev2", 00:27:12.152 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:12.152 "is_configured": true, 00:27:12.152 
"data_offset": 2048, 00:27:12.152 "data_size": 63488 00:27:12.152 }, 00:27:12.152 { 00:27:12.152 "name": "BaseBdev3", 00:27:12.152 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:12.152 "is_configured": true, 00:27:12.152 "data_offset": 2048, 00:27:12.152 "data_size": 63488 00:27:12.152 }, 00:27:12.152 { 00:27:12.152 "name": "BaseBdev4", 00:27:12.152 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:12.152 "is_configured": true, 00:27:12.152 "data_offset": 2048, 00:27:12.152 "data_size": 63488 00:27:12.152 } 00:27:12.152 ] 00:27:12.152 }' 00:27:12.152 00:46:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:12.152 00:46:45 -- common/autotest_common.sh@10 -- # set +x 00:27:13.089 00:46:46 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:13.089 00:46:46 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:27:13.089 [2024-04-27 00:46:46.533688] bdev_raid.c:1123:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:13.089 00:46:46 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:27:13.089 00:46:46 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:13.089 00:46:46 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.348 00:46:46 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:27:13.348 00:46:46 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:27:13.348 00:46:46 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:27:13.348 00:46:46 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@12 -- # local i 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:13.348 00:46:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:13.607 [2024-04-27 00:46:47.009704] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:13.607 /dev/nbd0 00:27:13.607 00:46:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:13.607 00:46:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:13.607 00:46:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:13.607 00:46:47 -- common/autotest_common.sh@855 -- # local i 00:27:13.607 00:46:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:13.607 00:46:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:13.607 00:46:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:13.607 00:46:47 -- common/autotest_common.sh@859 -- # break 00:27:13.607 00:46:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:13.607 00:46:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:13.607 00:46:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:13.607 1+0 records in 00:27:13.607 1+0 records out 00:27:13.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000242909 s, 16.9 MB/s 00:27:13.607 00:46:47 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:13.607 00:46:47 -- common/autotest_common.sh@872 -- # size=4096 00:27:13.607 00:46:47 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:13.607 00:46:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:13.607 00:46:47 -- common/autotest_common.sh@875 -- # return 0 00:27:13.607 00:46:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:13.607 00:46:47 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:13.607 00:46:47 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:27:13.607 00:46:47 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:27:13.607 00:46:47 -- bdev/bdev_raid.sh@582 -- # echo 192 00:27:13.607 00:46:47 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:27:14.174 496+0 records in 00:27:14.175 496+0 records out 00:27:14.175 97517568 bytes (98 MB, 93 MiB) copied, 0.508178 s, 192 MB/s 00:27:14.175 00:46:47 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:14.175 00:46:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:14.175 00:46:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:14.175 00:46:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:14.175 00:46:47 -- bdev/nbd_common.sh@51 -- # local i 00:27:14.175 00:46:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:14.175 00:46:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:14.433 00:46:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:14.433 [2024-04-27 00:46:47.836158] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.433 00:46:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:14.433 00:46:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:14.433 00:46:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:14.433 00:46:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:14.433 00:46:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:14.433 00:46:47 -- bdev/nbd_common.sh@41 -- # break 00:27:14.433 00:46:47 -- bdev/nbd_common.sh@45 -- # return 0 00:27:14.433 00:46:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:14.691 [2024-04-27 00:46:48.103585] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:14.691 00:46:48 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:14.691 00:46:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:14.691 00:46:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:14.691 00:46:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:14.692 00:46:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:14.692 00:46:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:14.692 00:46:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:14.692 00:46:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:14.692 00:46:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:14.692 00:46:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:14.692 00:46:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:27:14.692 00:46:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.950 00:46:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:14.950 "name": "raid_bdev1", 00:27:14.950 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:14.950 "strip_size_kb": 64, 00:27:14.950 "state": "online", 00:27:14.950 "raid_level": "raid5f", 00:27:14.950 "superblock": true, 00:27:14.950 "num_base_bdevs": 4, 00:27:14.950 "num_base_bdevs_discovered": 3, 00:27:14.950 "num_base_bdevs_operational": 3, 00:27:14.950 "base_bdevs_list": [ 00:27:14.950 { 00:27:14.950 "name": null, 00:27:14.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:14.950 "is_configured": false, 00:27:14.950 "data_offset": 2048, 00:27:14.950 "data_size": 63488 00:27:14.950 }, 00:27:14.950 { 00:27:14.950 "name": "BaseBdev2", 00:27:14.950 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:14.950 "is_configured": true, 00:27:14.950 "data_offset": 2048, 00:27:14.950 "data_size": 63488 00:27:14.950 }, 00:27:14.950 { 00:27:14.950 "name": "BaseBdev3", 00:27:14.950 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:14.950 "is_configured": true, 00:27:14.950 "data_offset": 2048, 00:27:14.950 "data_size": 63488 00:27:14.950 }, 00:27:14.950 { 00:27:14.950 "name": "BaseBdev4", 00:27:14.950 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:14.950 "is_configured": true, 00:27:14.950 "data_offset": 2048, 00:27:14.950 "data_size": 63488 00:27:14.950 } 00:27:14.950 ] 00:27:14.950 }' 00:27:14.950 00:46:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:14.950 00:46:48 -- common/autotest_common.sh@10 -- # set +x 00:27:15.517 00:46:48 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:15.776 [2024-04-27 00:46:49.182211] bdev_raid.c:3278:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:27:15.776 [2024-04-27 00:46:49.182277] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:15.776 [2024-04-27 00:46:49.193992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:27:15.776 [2024-04-27 00:46:49.201721] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:15.776 00:46:49 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:27:16.711 00:46:50 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:16.711 00:46:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:16.711 00:46:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:16.711 00:46:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:16.711 00:46:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:16.711 00:46:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.711 00:46:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.971 00:46:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:16.971 "name": "raid_bdev1", 00:27:16.971 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:16.971 "strip_size_kb": 64, 00:27:16.971 "state": "online", 00:27:16.971 "raid_level": "raid5f", 00:27:16.971 "superblock": true, 00:27:16.971 "num_base_bdevs": 4, 00:27:16.971 "num_base_bdevs_discovered": 4, 00:27:16.971 "num_base_bdevs_operational": 4, 00:27:16.971 "process": { 00:27:16.971 "type": "rebuild", 00:27:16.971 "target": "spare", 00:27:16.971 "progress": { 
00:27:16.971 "blocks": 23040, 00:27:16.971 "percent": 12 00:27:16.971 } 00:27:16.971 }, 00:27:16.971 "base_bdevs_list": [ 00:27:16.971 { 00:27:16.971 "name": "spare", 00:27:16.971 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:16.971 "is_configured": true, 00:27:16.971 "data_offset": 2048, 00:27:16.971 "data_size": 63488 00:27:16.971 }, 00:27:16.971 { 00:27:16.971 "name": "BaseBdev2", 00:27:16.971 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:16.971 "is_configured": true, 00:27:16.971 "data_offset": 2048, 00:27:16.971 "data_size": 63488 00:27:16.971 }, 00:27:16.971 { 00:27:16.971 "name": "BaseBdev3", 00:27:16.971 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:16.971 "is_configured": true, 00:27:16.971 "data_offset": 2048, 00:27:16.971 "data_size": 63488 00:27:16.971 }, 00:27:16.971 { 00:27:16.971 "name": "BaseBdev4", 00:27:16.971 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:16.971 "is_configured": true, 00:27:16.971 "data_offset": 2048, 00:27:16.971 "data_size": 63488 00:27:16.971 } 00:27:16.971 ] 00:27:16.971 }' 00:27:16.971 00:46:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:16.971 00:46:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:16.971 00:46:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:17.230 00:46:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:17.230 00:46:50 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:17.230 [2024-04-27 00:46:50.811139] bdev_raid.c:2118:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:17.230 [2024-04-27 00:46:50.816144] bdev_raid.c:2473:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:17.230 [2024-04-27 00:46:50.816235] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.488 00:46:50 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:27:17.488 00:46:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:17.488 00:46:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:17.488 00:46:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:17.488 00:46:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:17.488 00:46:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:27:17.488 00:46:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:17.488 00:46:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:17.489 00:46:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:17.489 00:46:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:17.489 00:46:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.489 00:46:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.747 00:46:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:17.747 "name": "raid_bdev1", 00:27:17.747 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:17.747 "strip_size_kb": 64, 00:27:17.747 "state": "online", 00:27:17.748 "raid_level": "raid5f", 00:27:17.748 "superblock": true, 00:27:17.748 "num_base_bdevs": 4, 00:27:17.748 "num_base_bdevs_discovered": 3, 00:27:17.748 "num_base_bdevs_operational": 3, 00:27:17.748 "base_bdevs_list": [ 00:27:17.748 { 00:27:17.748 "name": null, 00:27:17.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.748 "is_configured": 
false, 00:27:17.748 "data_offset": 2048, 00:27:17.748 "data_size": 63488 00:27:17.748 }, 00:27:17.748 { 00:27:17.748 "name": "BaseBdev2", 00:27:17.748 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:17.748 "is_configured": true, 00:27:17.748 "data_offset": 2048, 00:27:17.748 "data_size": 63488 00:27:17.748 }, 00:27:17.748 { 00:27:17.748 "name": "BaseBdev3", 00:27:17.748 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:17.748 "is_configured": true, 00:27:17.748 "data_offset": 2048, 00:27:17.748 "data_size": 63488 00:27:17.748 }, 00:27:17.748 { 00:27:17.748 "name": "BaseBdev4", 00:27:17.748 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:17.748 "is_configured": true, 00:27:17.748 "data_offset": 2048, 00:27:17.748 "data_size": 63488 00:27:17.748 } 00:27:17.748 ] 00:27:17.748 }' 00:27:17.748 00:46:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:17.748 00:46:51 -- common/autotest_common.sh@10 -- # set +x 00:27:18.316 00:46:51 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:18.316 00:46:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:18.316 00:46:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:18.316 00:46:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:18.316 00:46:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:18.316 00:46:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.316 00:46:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.576 00:46:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:18.576 "name": "raid_bdev1", 00:27:18.576 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:18.576 "strip_size_kb": 64, 00:27:18.576 "state": "online", 00:27:18.576 "raid_level": "raid5f", 00:27:18.576 "superblock": true, 00:27:18.576 "num_base_bdevs": 4, 00:27:18.576 "num_base_bdevs_discovered": 3, 00:27:18.576 "num_base_bdevs_operational": 3, 00:27:18.576 "base_bdevs_list": [ 00:27:18.576 { 00:27:18.576 "name": null, 00:27:18.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.576 "is_configured": false, 00:27:18.576 "data_offset": 2048, 00:27:18.576 "data_size": 63488 00:27:18.576 }, 00:27:18.576 { 00:27:18.576 "name": "BaseBdev2", 00:27:18.576 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:18.576 "is_configured": true, 00:27:18.576 "data_offset": 2048, 00:27:18.576 "data_size": 63488 00:27:18.576 }, 00:27:18.576 { 00:27:18.576 "name": "BaseBdev3", 00:27:18.576 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:18.576 "is_configured": true, 00:27:18.576 "data_offset": 2048, 00:27:18.576 "data_size": 63488 00:27:18.576 }, 00:27:18.576 { 00:27:18.576 "name": "BaseBdev4", 00:27:18.576 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:18.576 "is_configured": true, 00:27:18.576 "data_offset": 2048, 00:27:18.576 "data_size": 63488 00:27:18.576 } 00:27:18.576 ] 00:27:18.576 }' 00:27:18.576 00:46:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:18.576 00:46:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:18.576 00:46:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:18.576 00:46:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:18.576 00:46:52 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:18.836 [2024-04-27 00:46:52.388358] bdev_raid.c:3278:raid_bdev_attach_base_bdev: 
*DEBUG*: attach_base_device: spare 00:27:18.836 [2024-04-27 00:46:52.388435] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:18.836 [2024-04-27 00:46:52.398661] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:27:18.836 [2024-04-27 00:46:52.405838] bdev_raid.c:2782:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:18.836 00:46:52 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:20.212 "name": "raid_bdev1", 00:27:20.212 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:20.212 "strip_size_kb": 64, 00:27:20.212 "state": "online", 00:27:20.212 "raid_level": "raid5f", 00:27:20.212 "superblock": true, 00:27:20.212 "num_base_bdevs": 4, 00:27:20.212 "num_base_bdevs_discovered": 4, 00:27:20.212 "num_base_bdevs_operational": 4, 00:27:20.212 "process": { 00:27:20.212 "type": "rebuild", 00:27:20.212 "target": "spare", 00:27:20.212 "progress": { 00:27:20.212 "blocks": 21120, 00:27:20.212 "percent": 11 00:27:20.212 } 00:27:20.212 }, 00:27:20.212 "base_bdevs_list": [ 00:27:20.212 { 00:27:20.212 "name": "spare", 00:27:20.212 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:20.212 "is_configured": true, 00:27:20.212 "data_offset": 2048, 00:27:20.212 "data_size": 63488 00:27:20.212 }, 00:27:20.212 { 00:27:20.212 "name": "BaseBdev2", 00:27:20.212 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:20.212 "is_configured": true, 00:27:20.212 "data_offset": 2048, 00:27:20.212 "data_size": 63488 00:27:20.212 }, 00:27:20.212 { 00:27:20.212 "name": "BaseBdev3", 00:27:20.212 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:20.212 "is_configured": true, 00:27:20.212 "data_offset": 2048, 00:27:20.212 "data_size": 63488 00:27:20.212 }, 00:27:20.212 { 00:27:20.212 "name": "BaseBdev4", 00:27:20.212 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:20.212 "is_configured": true, 00:27:20.212 "data_offset": 2048, 00:27:20.212 "data_size": 63488 00:27:20.212 } 00:27:20.212 ] 00:27:20.212 }' 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:27:20.212 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@657 -- # local timeout=763 
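The "/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected" message captured a few statements above is a shell-level quirk of the test script rather than a RAID failure: the xtrace shows the script evaluating '[' = false ']', i.e. a single-bracket test whose left-hand variable expanded to the empty string, leaving `[` with a one-sided comparison. The test command returns a nonzero status, the `if` simply takes the false branch, and the run proceeds to line 642 as seen in the trace. Below is a minimal, generic bash reproduction of that failure mode plus the conventional quoting fix; the variable name is illustrative and not the one used in bdev_raid.sh:

    # Reproduction of "[: =: unary operator expected" caused by an
    # unquoted empty variable in a single-bracket test (illustrative).
    flag=""                        # empty, e.g. an unset positional arg
    if [ $flag = false ]; then     # expands to: [ = false ]  -> error,
        echo "never reached"       # test returns nonzero, branch skipped
    fi
    # Conventional fixes: quote the expansion, or use [[ ]], which does
    # not word-split its operands:
    if [ "$flag" = false ]; then echo "quoted test"; fi
    if [[ $flag = false ]]; then echo "double-bracket test"; fi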
00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:20.212 00:46:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.470 00:46:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:20.470 "name": "raid_bdev1", 00:27:20.470 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:20.470 "strip_size_kb": 64, 00:27:20.470 "state": "online", 00:27:20.470 "raid_level": "raid5f", 00:27:20.470 "superblock": true, 00:27:20.470 "num_base_bdevs": 4, 00:27:20.470 "num_base_bdevs_discovered": 4, 00:27:20.470 "num_base_bdevs_operational": 4, 00:27:20.470 "process": { 00:27:20.470 "type": "rebuild", 00:27:20.470 "target": "spare", 00:27:20.470 "progress": { 00:27:20.470 "blocks": 26880, 00:27:20.470 "percent": 14 00:27:20.470 } 00:27:20.470 }, 00:27:20.470 "base_bdevs_list": [ 00:27:20.470 { 00:27:20.470 "name": "spare", 00:27:20.470 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:20.470 "is_configured": true, 00:27:20.470 "data_offset": 2048, 00:27:20.470 "data_size": 63488 00:27:20.470 }, 00:27:20.470 { 00:27:20.470 "name": "BaseBdev2", 00:27:20.470 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:20.470 "is_configured": true, 00:27:20.470 "data_offset": 2048, 00:27:20.470 "data_size": 63488 00:27:20.470 }, 00:27:20.470 { 00:27:20.470 "name": "BaseBdev3", 00:27:20.470 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:20.470 "is_configured": true, 00:27:20.470 "data_offset": 2048, 00:27:20.470 "data_size": 63488 00:27:20.470 }, 00:27:20.470 { 00:27:20.470 "name": "BaseBdev4", 00:27:20.470 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:20.470 "is_configured": true, 00:27:20.470 "data_offset": 2048, 00:27:20.470 "data_size": 63488 00:27:20.470 } 00:27:20.470 ] 00:27:20.470 }' 00:27:20.470 00:46:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:20.470 00:46:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:20.470 00:46:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:20.470 00:46:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:20.470 00:46:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:21.849 "name": 
"raid_bdev1", 00:27:21.849 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:21.849 "strip_size_kb": 64, 00:27:21.849 "state": "online", 00:27:21.849 "raid_level": "raid5f", 00:27:21.849 "superblock": true, 00:27:21.849 "num_base_bdevs": 4, 00:27:21.849 "num_base_bdevs_discovered": 4, 00:27:21.849 "num_base_bdevs_operational": 4, 00:27:21.849 "process": { 00:27:21.849 "type": "rebuild", 00:27:21.849 "target": "spare", 00:27:21.849 "progress": { 00:27:21.849 "blocks": 53760, 00:27:21.849 "percent": 28 00:27:21.849 } 00:27:21.849 }, 00:27:21.849 "base_bdevs_list": [ 00:27:21.849 { 00:27:21.849 "name": "spare", 00:27:21.849 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:21.849 "is_configured": true, 00:27:21.849 "data_offset": 2048, 00:27:21.849 "data_size": 63488 00:27:21.849 }, 00:27:21.849 { 00:27:21.849 "name": "BaseBdev2", 00:27:21.849 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:21.849 "is_configured": true, 00:27:21.849 "data_offset": 2048, 00:27:21.849 "data_size": 63488 00:27:21.849 }, 00:27:21.849 { 00:27:21.849 "name": "BaseBdev3", 00:27:21.849 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:21.849 "is_configured": true, 00:27:21.849 "data_offset": 2048, 00:27:21.849 "data_size": 63488 00:27:21.849 }, 00:27:21.849 { 00:27:21.849 "name": "BaseBdev4", 00:27:21.849 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:21.849 "is_configured": true, 00:27:21.849 "data_offset": 2048, 00:27:21.849 "data_size": 63488 00:27:21.849 } 00:27:21.849 ] 00:27:21.849 }' 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:21.849 00:46:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:22.786 00:46:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:22.786 00:46:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:22.786 00:46:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:22.786 00:46:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:22.786 00:46:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:22.786 00:46:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:22.786 00:46:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.786 00:46:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.045 00:46:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:23.045 "name": "raid_bdev1", 00:27:23.045 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:23.045 "strip_size_kb": 64, 00:27:23.045 "state": "online", 00:27:23.045 "raid_level": "raid5f", 00:27:23.045 "superblock": true, 00:27:23.045 "num_base_bdevs": 4, 00:27:23.045 "num_base_bdevs_discovered": 4, 00:27:23.045 "num_base_bdevs_operational": 4, 00:27:23.045 "process": { 00:27:23.045 "type": "rebuild", 00:27:23.045 "target": "spare", 00:27:23.045 "progress": { 00:27:23.045 "blocks": 78720, 00:27:23.045 "percent": 41 00:27:23.045 } 00:27:23.045 }, 00:27:23.045 "base_bdevs_list": [ 00:27:23.045 { 00:27:23.045 "name": "spare", 00:27:23.045 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:23.045 "is_configured": true, 00:27:23.045 "data_offset": 2048, 00:27:23.045 "data_size": 63488 00:27:23.045 }, 00:27:23.045 { 00:27:23.045 
"name": "BaseBdev2", 00:27:23.045 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:23.045 "is_configured": true, 00:27:23.045 "data_offset": 2048, 00:27:23.045 "data_size": 63488 00:27:23.045 }, 00:27:23.045 { 00:27:23.045 "name": "BaseBdev3", 00:27:23.045 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:23.045 "is_configured": true, 00:27:23.045 "data_offset": 2048, 00:27:23.045 "data_size": 63488 00:27:23.045 }, 00:27:23.045 { 00:27:23.045 "name": "BaseBdev4", 00:27:23.045 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:23.045 "is_configured": true, 00:27:23.045 "data_offset": 2048, 00:27:23.045 "data_size": 63488 00:27:23.045 } 00:27:23.045 ] 00:27:23.045 }' 00:27:23.045 00:46:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:23.304 00:46:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:23.304 00:46:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:23.304 00:46:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:23.304 00:46:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:24.241 00:46:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:24.241 00:46:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:24.241 00:46:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:24.241 00:46:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:24.241 00:46:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:24.241 00:46:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:24.241 00:46:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.241 00:46:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.500 00:46:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:24.500 "name": "raid_bdev1", 00:27:24.500 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:24.500 "strip_size_kb": 64, 00:27:24.500 "state": "online", 00:27:24.500 "raid_level": "raid5f", 00:27:24.500 "superblock": true, 00:27:24.500 "num_base_bdevs": 4, 00:27:24.500 "num_base_bdevs_discovered": 4, 00:27:24.500 "num_base_bdevs_operational": 4, 00:27:24.500 "process": { 00:27:24.500 "type": "rebuild", 00:27:24.500 "target": "spare", 00:27:24.500 "progress": { 00:27:24.500 "blocks": 105600, 00:27:24.500 "percent": 55 00:27:24.500 } 00:27:24.500 }, 00:27:24.500 "base_bdevs_list": [ 00:27:24.500 { 00:27:24.500 "name": "spare", 00:27:24.500 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:24.500 "is_configured": true, 00:27:24.500 "data_offset": 2048, 00:27:24.500 "data_size": 63488 00:27:24.500 }, 00:27:24.500 { 00:27:24.500 "name": "BaseBdev2", 00:27:24.500 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:24.500 "is_configured": true, 00:27:24.500 "data_offset": 2048, 00:27:24.500 "data_size": 63488 00:27:24.500 }, 00:27:24.500 { 00:27:24.500 "name": "BaseBdev3", 00:27:24.500 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:24.500 "is_configured": true, 00:27:24.500 "data_offset": 2048, 00:27:24.500 "data_size": 63488 00:27:24.500 }, 00:27:24.500 { 00:27:24.500 "name": "BaseBdev4", 00:27:24.500 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:24.500 "is_configured": true, 00:27:24.500 "data_offset": 2048, 00:27:24.500 "data_size": 63488 00:27:24.500 } 00:27:24.500 ] 00:27:24.500 }' 00:27:24.500 00:46:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:24.500 00:46:58 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:27:24.500 00:46:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:24.500 00:46:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:24.500 00:46:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.878 00:46:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:25.878 "name": "raid_bdev1", 00:27:25.878 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:25.878 "strip_size_kb": 64, 00:27:25.878 "state": "online", 00:27:25.878 "raid_level": "raid5f", 00:27:25.878 "superblock": true, 00:27:25.878 "num_base_bdevs": 4, 00:27:25.878 "num_base_bdevs_discovered": 4, 00:27:25.878 "num_base_bdevs_operational": 4, 00:27:25.878 "process": { 00:27:25.878 "type": "rebuild", 00:27:25.878 "target": "spare", 00:27:25.878 "progress": { 00:27:25.878 "blocks": 130560, 00:27:25.878 "percent": 68 00:27:25.878 } 00:27:25.878 }, 00:27:25.878 "base_bdevs_list": [ 00:27:25.878 { 00:27:25.878 "name": "spare", 00:27:25.878 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:25.878 "is_configured": true, 00:27:25.878 "data_offset": 2048, 00:27:25.878 "data_size": 63488 00:27:25.878 }, 00:27:25.878 { 00:27:25.878 "name": "BaseBdev2", 00:27:25.878 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:25.878 "is_configured": true, 00:27:25.878 "data_offset": 2048, 00:27:25.878 "data_size": 63488 00:27:25.878 }, 00:27:25.878 { 00:27:25.878 "name": "BaseBdev3", 00:27:25.878 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:25.878 "is_configured": true, 00:27:25.878 "data_offset": 2048, 00:27:25.878 "data_size": 63488 00:27:25.878 }, 00:27:25.878 { 00:27:25.878 "name": "BaseBdev4", 00:27:25.878 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:25.879 "is_configured": true, 00:27:25.879 "data_offset": 2048, 00:27:25.879 "data_size": 63488 00:27:25.879 } 00:27:25.879 ] 00:27:25.879 }' 00:27:25.879 00:46:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:25.879 00:46:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:25.879 00:46:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:25.879 00:46:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:25.879 00:46:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:27.256 "name": "raid_bdev1", 00:27:27.256 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:27.256 "strip_size_kb": 64, 00:27:27.256 "state": "online", 00:27:27.256 "raid_level": "raid5f", 00:27:27.256 "superblock": true, 00:27:27.256 "num_base_bdevs": 4, 00:27:27.256 "num_base_bdevs_discovered": 4, 00:27:27.256 "num_base_bdevs_operational": 4, 00:27:27.256 "process": { 00:27:27.256 "type": "rebuild", 00:27:27.256 "target": "spare", 00:27:27.256 "progress": { 00:27:27.256 "blocks": 155520, 00:27:27.256 "percent": 81 00:27:27.256 } 00:27:27.256 }, 00:27:27.256 "base_bdevs_list": [ 00:27:27.256 { 00:27:27.256 "name": "spare", 00:27:27.256 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:27.256 "is_configured": true, 00:27:27.256 "data_offset": 2048, 00:27:27.256 "data_size": 63488 00:27:27.256 }, 00:27:27.256 { 00:27:27.256 "name": "BaseBdev2", 00:27:27.256 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:27.256 "is_configured": true, 00:27:27.256 "data_offset": 2048, 00:27:27.256 "data_size": 63488 00:27:27.256 }, 00:27:27.256 { 00:27:27.256 "name": "BaseBdev3", 00:27:27.256 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:27.256 "is_configured": true, 00:27:27.256 "data_offset": 2048, 00:27:27.256 "data_size": 63488 00:27:27.256 }, 00:27:27.256 { 00:27:27.256 "name": "BaseBdev4", 00:27:27.256 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:27.256 "is_configured": true, 00:27:27.256 "data_offset": 2048, 00:27:27.256 "data_size": 63488 00:27:27.256 } 00:27:27.256 ] 00:27:27.256 }' 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:27.256 00:47:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:28.634 00:47:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:28.634 00:47:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.634 00:47:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:28.634 00:47:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:28.634 00:47:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:28.634 00:47:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:28.634 00:47:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.634 00:47:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.634 00:47:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:28.634 "name": "raid_bdev1", 00:27:28.634 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:28.634 "strip_size_kb": 64, 00:27:28.634 "state": "online", 00:27:28.634 "raid_level": "raid5f", 00:27:28.634 "superblock": true, 00:27:28.634 "num_base_bdevs": 4, 00:27:28.634 "num_base_bdevs_discovered": 4, 00:27:28.634 "num_base_bdevs_operational": 4, 00:27:28.634 "process": { 00:27:28.634 "type": "rebuild", 00:27:28.634 "target": "spare", 00:27:28.634 "progress": { 00:27:28.634 "blocks": 182400, 00:27:28.634 "percent": 95 00:27:28.634 } 00:27:28.634 }, 
00:27:28.634 "base_bdevs_list": [ 00:27:28.634 { 00:27:28.634 "name": "spare", 00:27:28.634 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:28.634 "is_configured": true, 00:27:28.634 "data_offset": 2048, 00:27:28.634 "data_size": 63488 00:27:28.634 }, 00:27:28.634 { 00:27:28.634 "name": "BaseBdev2", 00:27:28.634 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:28.634 "is_configured": true, 00:27:28.634 "data_offset": 2048, 00:27:28.634 "data_size": 63488 00:27:28.634 }, 00:27:28.634 { 00:27:28.634 "name": "BaseBdev3", 00:27:28.634 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:28.634 "is_configured": true, 00:27:28.634 "data_offset": 2048, 00:27:28.634 "data_size": 63488 00:27:28.634 }, 00:27:28.634 { 00:27:28.634 "name": "BaseBdev4", 00:27:28.634 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:28.634 "is_configured": true, 00:27:28.634 "data_offset": 2048, 00:27:28.634 "data_size": 63488 00:27:28.634 } 00:27:28.634 ] 00:27:28.634 }' 00:27:28.634 00:47:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:28.634 00:47:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:28.634 00:47:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:28.634 00:47:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.634 00:47:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:29.201 [2024-04-27 00:47:02.494246] bdev_raid.c:2747:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:29.201 [2024-04-27 00:47:02.494355] bdev_raid.c:2464:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:29.201 [2024-04-27 00:47:02.494577] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.768 00:47:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:27:29.768 00:47:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.768 00:47:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:29.768 00:47:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:27:29.768 00:47:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:27:29.768 00:47:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:29.768 00:47:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.768 00:47:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:30.026 "name": "raid_bdev1", 00:27:30.026 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:30.026 "strip_size_kb": 64, 00:27:30.026 "state": "online", 00:27:30.026 "raid_level": "raid5f", 00:27:30.026 "superblock": true, 00:27:30.026 "num_base_bdevs": 4, 00:27:30.026 "num_base_bdevs_discovered": 4, 00:27:30.026 "num_base_bdevs_operational": 4, 00:27:30.026 "base_bdevs_list": [ 00:27:30.026 { 00:27:30.026 "name": "spare", 00:27:30.026 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:30.026 "is_configured": true, 00:27:30.026 "data_offset": 2048, 00:27:30.026 "data_size": 63488 00:27:30.026 }, 00:27:30.026 { 00:27:30.026 "name": "BaseBdev2", 00:27:30.026 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:30.026 "is_configured": true, 00:27:30.026 "data_offset": 2048, 00:27:30.026 "data_size": 63488 00:27:30.026 }, 00:27:30.026 { 00:27:30.026 "name": "BaseBdev3", 00:27:30.026 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:30.026 "is_configured": true, 00:27:30.026 
"data_offset": 2048, 00:27:30.026 "data_size": 63488 00:27:30.026 }, 00:27:30.026 { 00:27:30.026 "name": "BaseBdev4", 00:27:30.026 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:30.026 "is_configured": true, 00:27:30.026 "data_offset": 2048, 00:27:30.026 "data_size": 63488 00:27:30.026 } 00:27:30.026 ] 00:27:30.026 }' 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@660 -- # break 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.026 00:47:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.285 00:47:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:30.285 "name": "raid_bdev1", 00:27:30.285 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:30.285 "strip_size_kb": 64, 00:27:30.285 "state": "online", 00:27:30.285 "raid_level": "raid5f", 00:27:30.285 "superblock": true, 00:27:30.285 "num_base_bdevs": 4, 00:27:30.285 "num_base_bdevs_discovered": 4, 00:27:30.285 "num_base_bdevs_operational": 4, 00:27:30.285 "base_bdevs_list": [ 00:27:30.285 { 00:27:30.285 "name": "spare", 00:27:30.285 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:30.285 "is_configured": true, 00:27:30.285 "data_offset": 2048, 00:27:30.285 "data_size": 63488 00:27:30.285 }, 00:27:30.285 { 00:27:30.285 "name": "BaseBdev2", 00:27:30.285 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:30.285 "is_configured": true, 00:27:30.285 "data_offset": 2048, 00:27:30.285 "data_size": 63488 00:27:30.285 }, 00:27:30.285 { 00:27:30.285 "name": "BaseBdev3", 00:27:30.285 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:30.285 "is_configured": true, 00:27:30.285 "data_offset": 2048, 00:27:30.285 "data_size": 63488 00:27:30.285 }, 00:27:30.285 { 00:27:30.285 "name": "BaseBdev4", 00:27:30.285 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:30.285 "is_configured": true, 00:27:30.285 "data_offset": 2048, 00:27:30.285 "data_size": 63488 00:27:30.285 } 00:27:30.285 ] 00:27:30.285 }' 00:27:30.285 00:47:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:30.285 00:47:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:30.285 00:47:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:30.543 00:47:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:30.543 00:47:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:30.543 00:47:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=4 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:30.544 00:47:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.801 00:47:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:30.801 "name": "raid_bdev1", 00:27:30.801 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:30.801 "strip_size_kb": 64, 00:27:30.801 "state": "online", 00:27:30.801 "raid_level": "raid5f", 00:27:30.801 "superblock": true, 00:27:30.801 "num_base_bdevs": 4, 00:27:30.801 "num_base_bdevs_discovered": 4, 00:27:30.802 "num_base_bdevs_operational": 4, 00:27:30.802 "base_bdevs_list": [ 00:27:30.802 { 00:27:30.802 "name": "spare", 00:27:30.802 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:30.802 "is_configured": true, 00:27:30.802 "data_offset": 2048, 00:27:30.802 "data_size": 63488 00:27:30.802 }, 00:27:30.802 { 00:27:30.802 "name": "BaseBdev2", 00:27:30.802 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:30.802 "is_configured": true, 00:27:30.802 "data_offset": 2048, 00:27:30.802 "data_size": 63488 00:27:30.802 }, 00:27:30.802 { 00:27:30.802 "name": "BaseBdev3", 00:27:30.802 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:30.802 "is_configured": true, 00:27:30.802 "data_offset": 2048, 00:27:30.802 "data_size": 63488 00:27:30.802 }, 00:27:30.802 { 00:27:30.802 "name": "BaseBdev4", 00:27:30.802 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:30.802 "is_configured": true, 00:27:30.802 "data_offset": 2048, 00:27:30.802 "data_size": 63488 00:27:30.802 } 00:27:30.802 ] 00:27:30.802 }' 00:27:30.802 00:47:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:30.802 00:47:04 -- common/autotest_common.sh@10 -- # set +x 00:27:31.368 00:47:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:31.626 [2024-04-27 00:47:05.109380] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:31.626 [2024-04-27 00:47:05.109439] bdev_raid.c:1852:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:31.626 [2024-04-27 00:47:05.109521] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:31.626 [2024-04-27 00:47:05.109653] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:31.626 [2024-04-27 00:47:05.109666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000010e00 name raid_bdev1, state offline 00:27:31.626 00:47:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.626 00:47:05 -- bdev/bdev_raid.sh@671 -- # jq length 00:27:31.885 00:47:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:27:31.885 00:47:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:27:31.885 00:47:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@12 -- # local i 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:31.885 00:47:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:32.144 /dev/nbd0 00:27:32.144 00:47:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:32.144 00:47:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:32.144 00:47:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:27:32.144 00:47:05 -- common/autotest_common.sh@855 -- # local i 00:27:32.144 00:47:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:32.144 00:47:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:32.144 00:47:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:27:32.144 00:47:05 -- common/autotest_common.sh@859 -- # break 00:27:32.144 00:47:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:32.144 00:47:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:32.144 00:47:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:32.144 1+0 records in 00:27:32.144 1+0 records out 00:27:32.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590107 s, 6.9 MB/s 00:27:32.144 00:47:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:32.144 00:47:05 -- common/autotest_common.sh@872 -- # size=4096 00:27:32.144 00:47:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:32.144 00:47:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:32.144 00:47:05 -- common/autotest_common.sh@875 -- # return 0 00:27:32.144 00:47:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:32.144 00:47:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:32.144 00:47:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:32.403 /dev/nbd1 00:27:32.403 00:47:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:32.403 00:47:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:32.403 00:47:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:27:32.403 00:47:05 -- common/autotest_common.sh@855 -- # local i 00:27:32.403 00:47:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:27:32.403 00:47:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:27:32.403 00:47:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:27:32.403 00:47:05 -- common/autotest_common.sh@859 -- # break 00:27:32.403 00:47:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:27:32.403 00:47:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:27:32.403 00:47:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:32.403 1+0 records in 00:27:32.403 1+0 records out 00:27:32.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381938 s, 10.7 MB/s 00:27:32.403 00:47:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:27:32.662 00:47:05 -- common/autotest_common.sh@872 -- # size=4096 00:27:32.662 00:47:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:32.662 00:47:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:27:32.662 00:47:05 -- common/autotest_common.sh@875 -- # return 0 00:27:32.662 00:47:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:32.662 00:47:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:32.662 00:47:05 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:32.662 00:47:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:32.662 00:47:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:32.662 00:47:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:32.662 00:47:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:32.662 00:47:06 -- bdev/nbd_common.sh@51 -- # local i 00:27:32.662 00:47:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.662 00:47:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@41 -- # break 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@45 -- # return 0 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:32.921 00:47:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:33.179 00:47:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:33.179 00:47:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:33.179 00:47:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:33.179 00:47:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:33.179 00:47:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:33.179 00:47:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:33.179 00:47:06 -- bdev/nbd_common.sh@41 -- # break 00:27:33.179 00:47:06 -- bdev/nbd_common.sh@45 -- # return 0 00:27:33.179 00:47:06 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:27:33.179 00:47:06 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:33.179 00:47:06 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:27:33.179 00:47:06 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:33.437 00:47:06 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:33.696 [2024-04-27 00:47:07.252080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:33.696 [2024-04-27 00:47:07.252210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.696 [2024-04-27 00:47:07.252253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:27:33.696 [2024-04-27 00:47:07.252276] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:27:33.696 [2024-04-27 00:47:07.254614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.696 [2024-04-27 00:47:07.254692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:33.696 [2024-04-27 00:47:07.254810] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:33.696 [2024-04-27 00:47:07.254875] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:33.696 BaseBdev1 00:27:33.696 00:47:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:33.696 00:47:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:27:33.696 00:47:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:27:33.955 00:47:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:34.214 [2024-04-27 00:47:07.736195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:34.214 [2024-04-27 00:47:07.736324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.214 [2024-04-27 00:47:07.736375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:27:34.214 [2024-04-27 00:47:07.736398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.214 [2024-04-27 00:47:07.736984] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.214 [2024-04-27 00:47:07.737035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:34.214 [2024-04-27 00:47:07.737148] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:27:34.214 [2024-04-27 00:47:07.737162] bdev_raid.c:3432:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:27:34.214 [2024-04-27 00:47:07.737170] bdev_raid.c:2316:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:34.214 [2024-04-27 00:47:07.737204] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011180 name raid_bdev1, state configuring 00:27:34.214 [2024-04-27 00:47:07.737278] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:34.214 BaseBdev2 00:27:34.214 00:47:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:34.214 00:47:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:27:34.214 00:47:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:27:34.473 00:47:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:27:34.732 [2024-04-27 00:47:08.148298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:27:34.733 [2024-04-27 00:47:08.148434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:34.733 [2024-04-27 00:47:08.148474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:27:34.733 [2024-04-27 00:47:08.148501] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:34.733 [2024-04-27 00:47:08.149046] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:34.733 [2024-04-27 00:47:08.149099] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:34.733 [2024-04-27 00:47:08.149205] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:27:34.733 [2024-04-27 00:47:08.149231] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:34.733 BaseBdev3 00:27:34.733 00:47:08 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:27:34.733 00:47:08 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:27:34.733 00:47:08 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:27:34.991 00:47:08 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:27:35.252 [2024-04-27 00:47:08.620411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:27:35.252 [2024-04-27 00:47:08.620775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.252 [2024-04-27 00:47:08.620931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:27:35.252 [2024-04-27 00:47:08.621058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.252 [2024-04-27 00:47:08.621658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.252 [2024-04-27 00:47:08.621844] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:35.252 [2024-04-27 00:47:08.622067] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:27:35.252 [2024-04-27 00:47:08.622195] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:35.252 BaseBdev4 00:27:35.252 00:47:08 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:35.520 00:47:08 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:35.778 [2024-04-27 00:47:09.108544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:35.778 [2024-04-27 00:47:09.108943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:35.778 [2024-04-27 00:47:09.109177] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:27:35.778 [2024-04-27 00:47:09.109311] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:35.778 [2024-04-27 00:47:09.110028] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:35.778 [2024-04-27 00:47:09.110268] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:35.778 [2024-04-27 00:47:09.110580] bdev_raid.c:3537:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:27:35.778 [2024-04-27 00:47:09.110780] bdev_raid.c:3118:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:35.778 spare 00:27:35.778 00:47:09 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:27:35.778 00:47:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:27:35.778 00:47:09 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:27:35.778 00:47:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:27:35.778 00:47:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:27:35.778 00:47:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:27:35.778 00:47:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:27:35.779 00:47:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:27:35.779 00:47:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:27:35.779 00:47:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:27:35.779 00:47:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.779 00:47:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.779 [2024-04-27 00:47:09.211063] bdev_raid.c:1701:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000011500 00:27:35.779 [2024-04-27 00:47:09.211285] bdev_raid.c:1702:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:35.779 [2024-04-27 00:47:09.211520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049510 00:27:35.779 [2024-04-27 00:47:09.217226] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000011500 00:27:35.779 [2024-04-27 00:47:09.217384] bdev_raid.c:1732:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000011500 00:27:35.779 [2024-04-27 00:47:09.217697] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.779 00:47:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:27:35.779 "name": "raid_bdev1", 00:27:35.779 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:35.779 "strip_size_kb": 64, 00:27:35.779 "state": "online", 00:27:35.779 "raid_level": "raid5f", 00:27:35.779 "superblock": true, 00:27:35.779 "num_base_bdevs": 4, 00:27:35.779 "num_base_bdevs_discovered": 4, 00:27:35.779 "num_base_bdevs_operational": 4, 00:27:35.779 "base_bdevs_list": [ 00:27:35.779 { 00:27:35.779 "name": "spare", 00:27:35.779 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:35.779 "is_configured": true, 00:27:35.779 "data_offset": 2048, 00:27:35.779 "data_size": 63488 00:27:35.779 }, 00:27:35.779 { 00:27:35.779 "name": "BaseBdev2", 00:27:35.779 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:35.779 "is_configured": true, 00:27:35.779 "data_offset": 2048, 00:27:35.779 "data_size": 63488 00:27:35.779 }, 00:27:35.779 { 00:27:35.779 "name": "BaseBdev3", 00:27:35.779 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:35.779 "is_configured": true, 00:27:35.779 "data_offset": 2048, 00:27:35.779 "data_size": 63488 00:27:35.779 }, 00:27:35.779 { 00:27:35.779 "name": "BaseBdev4", 00:27:35.779 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:35.779 "is_configured": true, 00:27:35.779 "data_offset": 2048, 00:27:35.779 "data_size": 63488 00:27:35.779 } 00:27:35.779 ] 00:27:35.779 }' 00:27:35.779 00:47:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:27:35.779 00:47:09 -- common/autotest_common.sh@10 -- # set +x 00:27:36.715 00:47:09 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:36.715 00:47:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:27:36.715 00:47:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:27:36.715 00:47:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:27:36.715 00:47:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:27:36.715 00:47:09 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.715 00:47:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.715 00:47:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:27:36.715 "name": "raid_bdev1", 00:27:36.715 "uuid": "8ff8be1e-814f-4780-b2fc-81ccf3cb7580", 00:27:36.715 "strip_size_kb": 64, 00:27:36.715 "state": "online", 00:27:36.715 "raid_level": "raid5f", 00:27:36.715 "superblock": true, 00:27:36.715 "num_base_bdevs": 4, 00:27:36.715 "num_base_bdevs_discovered": 4, 00:27:36.715 "num_base_bdevs_operational": 4, 00:27:36.715 "base_bdevs_list": [ 00:27:36.715 { 00:27:36.715 "name": "spare", 00:27:36.715 "uuid": "097fd8ce-81d6-5102-8f90-065539f557cb", 00:27:36.715 "is_configured": true, 00:27:36.715 "data_offset": 2048, 00:27:36.715 "data_size": 63488 00:27:36.715 }, 00:27:36.715 { 00:27:36.715 "name": "BaseBdev2", 00:27:36.715 "uuid": "3a362146-1040-5661-a28c-28a904a7955b", 00:27:36.715 "is_configured": true, 00:27:36.715 "data_offset": 2048, 00:27:36.715 "data_size": 63488 00:27:36.715 }, 00:27:36.715 { 00:27:36.715 "name": "BaseBdev3", 00:27:36.715 "uuid": "888563d3-bcf8-5822-9b97-0d11d2898b09", 00:27:36.715 "is_configured": true, 00:27:36.715 "data_offset": 2048, 00:27:36.715 "data_size": 63488 00:27:36.715 }, 00:27:36.715 { 00:27:36.715 "name": "BaseBdev4", 00:27:36.715 "uuid": "9684ba62-e90e-5a3a-9b6a-a9612f8652cb", 00:27:36.715 "is_configured": true, 00:27:36.715 "data_offset": 2048, 00:27:36.715 "data_size": 63488 00:27:36.715 } 00:27:36.715 ] 00:27:36.715 }' 00:27:36.715 00:47:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:27:36.972 00:47:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:36.972 00:47:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:27:36.972 00:47:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:27:36.972 00:47:10 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.972 00:47:10 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:37.230 00:47:10 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:27:37.230 00:47:10 -- bdev/bdev_raid.sh@709 -- # killprocess 139584 00:27:37.230 00:47:10 -- common/autotest_common.sh@936 -- # '[' -z 139584 ']' 00:27:37.230 00:47:10 -- common/autotest_common.sh@940 -- # kill -0 139584 00:27:37.230 00:47:10 -- common/autotest_common.sh@941 -- # uname 00:27:37.230 00:47:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:37.230 00:47:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 139584 00:27:37.230 00:47:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:37.230 00:47:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:37.230 00:47:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 139584' 00:27:37.230 killing process with pid 139584 00:27:37.230 00:47:10 -- common/autotest_common.sh@955 -- # kill 139584 00:27:37.230 Received shutdown signal, test time was about 60.000000 seconds 00:27:37.230 00:27:37.230 Latency(us) 00:27:37.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.230 =================================================================================================================== 00:27:37.230 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:37.230 00:47:10 -- common/autotest_common.sh@960 -- # wait 139584 
00:27:37.230 [2024-04-27 00:47:10.587909] bdev_raid.c:1364:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:37.230 [2024-04-27 00:47:10.588229] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:37.230 [2024-04-27 00:47:10.588457] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:37.230 [2024-04-27 00:47:10.588569] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000011500 name raid_bdev1, state offline 00:27:37.487 [2024-04-27 00:47:10.963382] bdev_raid.c:1381:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:38.861 00:47:12 -- bdev/bdev_raid.sh@711 -- # return 0 00:27:38.861 00:27:38.861 real 0m30.470s 00:27:38.861 user 0m46.659s 00:27:38.861 sys 0m3.408s 00:27:38.861 ************************************ 00:27:38.861 END TEST raid5f_rebuild_test_sb 00:27:38.861 ************************************ 00:27:38.861 00:47:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:38.861 00:47:12 -- common/autotest_common.sh@10 -- # set +x 00:27:38.861 00:47:12 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:27:38.861 ************************************ 00:27:38.861 END TEST bdev_raid 00:27:38.861 ************************************ 00:27:38.861 00:27:38.861 real 12m31.790s 00:27:38.861 user 20m43.480s 00:27:38.861 sys 1m36.264s 00:27:38.861 00:47:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:38.861 00:47:12 -- common/autotest_common.sh@10 -- # set +x 00:27:38.861 00:47:12 -- spdk/autotest.sh@187 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:27:38.861 00:47:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:38.861 00:47:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:38.861 00:47:12 -- common/autotest_common.sh@10 -- # set +x 00:27:38.861 ************************************ 00:27:38.861 START TEST bdevperf_config 00:27:38.861 ************************************ 00:27:38.861 00:47:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:27:38.861 * Looking for test storage... 
00:27:38.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:27:38.861 00:47:12 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:27:38.861 00:47:12 -- bdevperf/common.sh@8 -- # local job_section=global 00:27:38.861 00:47:12 -- bdevperf/common.sh@9 -- # local rw=read 00:27:38.861 00:47:12 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:38.861 00:47:12 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:38.861 00:47:12 -- bdevperf/common.sh@13 -- # cat 00:27:38.861 00:47:12 -- bdevperf/common.sh@18 -- # job='[global]' 00:27:38.861 00:27:38.861 00:47:12 -- bdevperf/common.sh@19 -- # echo 00:27:38.861 00:47:12 -- bdevperf/common.sh@20 -- # cat 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@18 -- # create_job job0 00:27:38.861 00:47:12 -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:38.861 00:47:12 -- bdevperf/common.sh@9 -- # local rw= 00:27:38.861 00:47:12 -- bdevperf/common.sh@10 -- # local filename= 00:27:38.861 00:47:12 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:38.861 00:47:12 -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:38.861 00:27:38.861 00:47:12 -- bdevperf/common.sh@19 -- # echo 00:27:38.861 00:47:12 -- bdevperf/common.sh@20 -- # cat 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@19 -- # create_job job1 00:27:38.861 00:47:12 -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:38.861 00:47:12 -- bdevperf/common.sh@9 -- # local rw= 00:27:38.861 00:47:12 -- bdevperf/common.sh@10 -- # local filename= 00:27:38.861 00:47:12 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:38.861 00:47:12 -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:38.861 00:27:38.861 00:47:12 -- bdevperf/common.sh@19 -- # echo 00:27:38.861 00:47:12 -- bdevperf/common.sh@20 -- # cat 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@20 -- # create_job job2 00:27:38.861 00:47:12 -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:38.861 00:47:12 -- bdevperf/common.sh@9 -- # local rw= 00:27:38.861 00:47:12 -- bdevperf/common.sh@10 -- # local filename= 00:27:38.861 00:47:12 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:38.861 00:47:12 -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:38.861 00:27:38.861 00:47:12 -- bdevperf/common.sh@19 -- # echo 00:27:38.861 00:47:12 -- bdevperf/common.sh@20 -- # cat 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@21 -- # create_job job3 00:27:38.861 00:47:12 -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:38.861 00:47:12 -- bdevperf/common.sh@9 -- # local rw= 00:27:38.861 00:47:12 -- bdevperf/common.sh@10 -- # local filename= 00:27:38.861 00:47:12 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:38.861 00:27:38.861 00:47:12 -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:38.861 00:47:12 -- bdevperf/common.sh@19 -- # echo 00:27:38.861 00:47:12 -- bdevperf/common.sh@20 -- # cat 00:27:38.861 00:47:12 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:43.045 00:47:16 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-04-27 00:47:12.370473] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:43.045 [2024-04-27 00:47:12.370659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140364 ] 00:27:43.045 Using job config with 4 jobs 00:27:43.045 [2024-04-27 00:47:12.537547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.045 [2024-04-27 00:47:12.741298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.045 cpumask for '\''job0'\'' is too big 00:27:43.045 cpumask for '\''job1'\'' is too big 00:27:43.045 cpumask for '\''job2'\'' is too big 00:27:43.045 cpumask for '\''job3'\'' is too big 00:27:43.045 Running I/O for 2 seconds... 00:27:43.045 00:27:43.045 Latency(us) 00:27:43.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.01 28621.89 27.95 0.00 0.00 8937.41 1489.45 14358.34 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.01 28603.18 27.93 0.00 0.00 8925.59 1422.43 16562.73 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.02 28582.37 27.91 0.00 0.00 8917.36 1444.77 16681.89 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.02 28661.89 27.99 0.00 0.00 8878.03 685.15 16920.20 00:27:43.045 =================================================================================================================== 00:27:43.045 Total : 114469.33 111.79 0.00 0.00 8914.56 685.15 16920.20' 00:27:43.045 00:47:16 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-04-27 00:47:12.370473] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:43.045 [2024-04-27 00:47:12.370659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140364 ] 00:27:43.045 Using job config with 4 jobs 00:27:43.045 [2024-04-27 00:47:12.537547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.045 [2024-04-27 00:47:12.741298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.045 cpumask for '\''job0'\'' is too big 00:27:43.045 cpumask for '\''job1'\'' is too big 00:27:43.045 cpumask for '\''job2'\'' is too big 00:27:43.045 cpumask for '\''job3'\'' is too big 00:27:43.045 Running I/O for 2 seconds... 
00:27:43.045 00:27:43.045 Latency(us) 00:27:43.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.01 28621.89 27.95 0.00 0.00 8937.41 1489.45 14358.34 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.01 28603.18 27.93 0.00 0.00 8925.59 1422.43 16562.73 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.02 28582.37 27.91 0.00 0.00 8917.36 1444.77 16681.89 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.02 28661.89 27.99 0.00 0.00 8878.03 685.15 16920.20 00:27:43.045 =================================================================================================================== 00:27:43.045 Total : 114469.33 111.79 0.00 0.00 8914.56 685.15 16920.20' 00:27:43.045 00:47:16 -- bdevperf/common.sh@32 -- # echo '[2024-04-27 00:47:12.370473] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:43.045 [2024-04-27 00:47:12.370659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140364 ] 00:27:43.045 Using job config with 4 jobs 00:27:43.045 [2024-04-27 00:47:12.537547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.045 [2024-04-27 00:47:12.741298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.045 cpumask for '\''job0'\'' is too big 00:27:43.045 cpumask for '\''job1'\'' is too big 00:27:43.045 cpumask for '\''job2'\'' is too big 00:27:43.045 cpumask for '\''job3'\'' is too big 00:27:43.045 Running I/O for 2 seconds... 00:27:43.045 00:27:43.045 Latency(us) 00:27:43.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.01 28621.89 27.95 0.00 0.00 8937.41 1489.45 14358.34 00:27:43.045 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.045 Malloc0 : 2.01 28603.18 27.93 0.00 0.00 8925.59 1422.43 16562.73 00:27:43.046 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.046 Malloc0 : 2.02 28582.37 27.91 0.00 0.00 8917.36 1444.77 16681.89 00:27:43.046 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:43.046 Malloc0 : 2.02 28661.89 27.99 0.00 0.00 8878.03 685.15 16920.20 00:27:43.046 =================================================================================================================== 00:27:43.046 Total : 114469.33 111.79 0.00 0.00 8914.56 685.15 16920.20' 00:27:43.046 00:47:16 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:43.046 00:47:16 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:43.046 00:47:16 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:27:43.046 00:47:16 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:43.046 [2024-04-27 00:47:16.493062] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:27:43.046 [2024-04-27 00:47:16.494136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140418 ] 00:27:43.304 [2024-04-27 00:47:16.660425] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.304 [2024-04-27 00:47:16.868993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.881 cpumask for 'job0' is too big 00:27:43.881 cpumask for 'job1' is too big 00:27:43.881 cpumask for 'job2' is too big 00:27:43.881 cpumask for 'job3' is too big 00:27:47.171 00:47:20 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:27:47.171 Running I/O for 2 seconds... 00:27:47.171 00:27:47.171 Latency(us) 00:27:47.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.171 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.171 Malloc0 : 2.01 27513.58 26.87 0.00 0.00 9296.72 1608.61 14537.08 00:27:47.171 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.171 Malloc0 : 2.02 27527.07 26.88 0.00 0.00 9273.76 1511.80 12988.04 00:27:47.171 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.171 Malloc0 : 2.02 27507.43 26.86 0.00 0.00 9261.60 1526.69 11498.59 00:27:47.171 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:27:47.171 Malloc0 : 2.02 27489.49 26.85 0.00 0.00 9250.09 1765.00 11439.01 00:27:47.171 =================================================================================================================== 00:27:47.171 Total : 110037.58 107.46 0.00 0.00 9270.51 1511.80 14537.08' 00:27:47.171 00:47:20 -- bdevperf/test_config.sh@27 -- # cleanup 00:27:47.171 00:47:20 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:47.171 00:47:20 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:27:47.171 00:47:20 -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:47.171 00:47:20 -- bdevperf/common.sh@9 -- # local rw=write 00:27:47.171 00:47:20 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:47.171 00:47:20 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:47.171 00:47:20 -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:47.171 00:27:47.171 00:47:20 -- bdevperf/common.sh@19 -- # echo 00:27:47.171 00:47:20 -- bdevperf/common.sh@20 -- # cat 00:27:47.171 00:47:20 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:27:47.171 00:47:20 -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:47.171 00:47:20 -- bdevperf/common.sh@9 -- # local rw=write 00:27:47.171 00:47:20 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:47.171 00:47:20 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:47.171 00:47:20 -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:47.171 00:27:47.171 00:47:20 -- bdevperf/common.sh@19 -- # echo 00:27:47.171 00:47:20 -- bdevperf/common.sh@20 -- # cat 00:27:47.171 00:47:20 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:27:47.171 00:47:20 -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:47.171 00:47:20 -- bdevperf/common.sh@9 -- # local rw=write 00:27:47.171 00:47:20 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:27:47.171 00:47:20 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:47.171 00:27:47.171 00:47:20 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:27:47.172 00:47:20 -- bdevperf/common.sh@19 -- # echo 00:27:47.172 00:47:20 -- bdevperf/common.sh@20 -- # cat 00:27:47.172 00:47:20 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-04-27 00:47:20.641859] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:51.358 [2024-04-27 00:47:20.642046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140471 ] 00:27:51.358 Using job config with 3 jobs 00:27:51.358 [2024-04-27 00:47:20.792454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.358 [2024-04-27 00:47:20.998776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.358 cpumask for '\''job0'\'' is too big 00:27:51.358 cpumask for '\''job1'\'' is too big 00:27:51.358 cpumask for '\''job2'\'' is too big 00:27:51.358 Running I/O for 2 seconds... 00:27:51.358 00:27:51.358 Latency(us) 00:27:51.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40041.32 39.10 0.00 0.00 6386.82 1511.80 9592.09 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40050.22 39.11 0.00 0.00 6373.94 1571.37 7983.48 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40022.87 39.08 0.00 0.00 6366.60 1601.16 7923.90 00:27:51.358 =================================================================================================================== 00:27:51.358 Total : 120114.42 117.30 0.00 0.00 6375.77 1511.80 9592.09' 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-04-27 00:47:20.641859] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:51.358 [2024-04-27 00:47:20.642046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140471 ] 00:27:51.358 Using job config with 3 jobs 00:27:51.358 [2024-04-27 00:47:20.792454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.358 [2024-04-27 00:47:20.998776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.358 cpumask for '\''job0'\'' is too big 00:27:51.358 cpumask for '\''job1'\'' is too big 00:27:51.358 cpumask for '\''job2'\'' is too big 00:27:51.358 Running I/O for 2 seconds... 
00:27:51.358 00:27:51.358 Latency(us) 00:27:51.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40041.32 39.10 0.00 0.00 6386.82 1511.80 9592.09 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40050.22 39.11 0.00 0.00 6373.94 1571.37 7983.48 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40022.87 39.08 0.00 0.00 6366.60 1601.16 7923.90 00:27:51.358 =================================================================================================================== 00:27:51.358 Total : 120114.42 117.30 0.00 0.00 6375.77 1511.80 9592.09' 00:27:51.358 00:47:24 -- bdevperf/common.sh@32 -- # echo '[2024-04-27 00:47:20.641859] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:51.358 [2024-04-27 00:47:20.642046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140471 ] 00:27:51.358 Using job config with 3 jobs 00:27:51.358 [2024-04-27 00:47:20.792454] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.358 [2024-04-27 00:47:20.998776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.358 cpumask for '\''job0'\'' is too big 00:27:51.358 cpumask for '\''job1'\'' is too big 00:27:51.358 cpumask for '\''job2'\'' is too big 00:27:51.358 Running I/O for 2 seconds... 00:27:51.358 00:27:51.358 Latency(us) 00:27:51.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40041.32 39.10 0.00 0.00 6386.82 1511.80 9592.09 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40050.22 39.11 0.00 0.00 6373.94 1571.37 7983.48 00:27:51.358 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:27:51.358 Malloc0 : 2.01 40022.87 39.08 0.00 0.00 6366.60 1601.16 7923.90 00:27:51.358 =================================================================================================================== 00:27:51.358 Total : 120114.42 117.30 0.00 0.00 6375.77 1511.80 9592.09' 00:27:51.358 00:47:24 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:51.358 00:47:24 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@35 -- # cleanup 00:27:51.358 00:47:24 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:27:51.358 00:47:24 -- bdevperf/common.sh@8 -- # local job_section=global 00:27:51.358 00:47:24 -- bdevperf/common.sh@9 -- # local rw=rw 00:27:51.358 00:47:24 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:27:51.358 00:47:24 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:27:51.358 00:47:24 -- bdevperf/common.sh@13 -- # cat 00:27:51.358 00:47:24 -- bdevperf/common.sh@18 -- # job='[global]' 00:27:51.358 00:27:51.358 00:47:24 -- bdevperf/common.sh@19 -- # echo 00:27:51.358 
00:47:24 -- bdevperf/common.sh@20 -- # cat 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@38 -- # create_job job0 00:27:51.358 00:47:24 -- bdevperf/common.sh@8 -- # local job_section=job0 00:27:51.358 00:47:24 -- bdevperf/common.sh@9 -- # local rw= 00:27:51.358 00:47:24 -- bdevperf/common.sh@10 -- # local filename= 00:27:51.358 00:47:24 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:27:51.358 00:47:24 -- bdevperf/common.sh@18 -- # job='[job0]' 00:27:51.358 00:27:51.358 00:47:24 -- bdevperf/common.sh@19 -- # echo 00:27:51.358 00:47:24 -- bdevperf/common.sh@20 -- # cat 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@39 -- # create_job job1 00:27:51.358 00:47:24 -- bdevperf/common.sh@8 -- # local job_section=job1 00:27:51.358 00:47:24 -- bdevperf/common.sh@9 -- # local rw= 00:27:51.358 00:47:24 -- bdevperf/common.sh@10 -- # local filename= 00:27:51.358 00:47:24 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:27:51.358 00:47:24 -- bdevperf/common.sh@18 -- # job='[job1]' 00:27:51.358 00:27:51.358 00:47:24 -- bdevperf/common.sh@19 -- # echo 00:27:51.358 00:47:24 -- bdevperf/common.sh@20 -- # cat 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@40 -- # create_job job2 00:27:51.358 00:47:24 -- bdevperf/common.sh@8 -- # local job_section=job2 00:27:51.358 00:47:24 -- bdevperf/common.sh@9 -- # local rw= 00:27:51.358 00:47:24 -- bdevperf/common.sh@10 -- # local filename= 00:27:51.358 00:47:24 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:27:51.358 00:47:24 -- bdevperf/common.sh@18 -- # job='[job2]' 00:27:51.358 00:27:51.358 00:47:24 -- bdevperf/common.sh@19 -- # echo 00:27:51.358 00:47:24 -- bdevperf/common.sh@20 -- # cat 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@41 -- # create_job job3 00:27:51.358 00:47:24 -- bdevperf/common.sh@8 -- # local job_section=job3 00:27:51.358 00:47:24 -- bdevperf/common.sh@9 -- # local rw= 00:27:51.358 00:47:24 -- bdevperf/common.sh@10 -- # local filename= 00:27:51.358 00:47:24 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:27:51.358 00:47:24 -- bdevperf/common.sh@18 -- # job='[job3]' 00:27:51.358 00:27:51.358 00:47:24 -- bdevperf/common.sh@19 -- # echo 00:27:51.358 00:47:24 -- bdevperf/common.sh@20 -- # cat 00:27:51.358 00:47:24 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:55.571 00:47:28 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-04-27 00:47:24.775449] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:55.571 [2024-04-27 00:47:24.775694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140529 ] 00:27:55.571 Using job config with 4 jobs 00:27:55.571 [2024-04-27 00:47:24.945464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.571 [2024-04-27 00:47:25.168322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.571 cpumask for '\''job0'\'' is too big 00:27:55.571 cpumask for '\''job1'\'' is too big 00:27:55.571 cpumask for '\''job2'\'' is too big 00:27:55.571 cpumask for '\''job3'\'' is too big 00:27:55.571 Running I/O for 2 seconds... 
00:27:55.571 00:27:55.571 Latency(us) 00:27:55.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.03 14349.05 14.01 0.00 0.00 17826.51 3008.70 25618.62 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14338.19 14.00 0.00 0.00 17826.11 3634.27 25618.62 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.04 14327.82 13.99 0.00 0.00 17788.97 2934.23 22520.55 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14316.13 13.98 0.00 0.00 17791.73 3574.69 22520.55 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.04 14305.25 13.97 0.00 0.00 17756.40 3098.07 21924.77 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14294.94 13.96 0.00 0.00 17757.80 3500.22 21805.61 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.04 14284.54 13.95 0.00 0.00 17721.26 2993.80 21686.46 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14273.36 13.94 0.00 0.00 17722.03 3574.69 21924.77 00:27:55.571 =================================================================================================================== 00:27:55.571 Total : 114489.28 111.81 0.00 0.00 17773.85 2934.23 25618.62' 00:27:55.571 00:47:28 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-04-27 00:47:24.775449] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:55.571 [2024-04-27 00:47:24.775694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140529 ] 00:27:55.571 Using job config with 4 jobs 00:27:55.571 [2024-04-27 00:47:24.945464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.571 [2024-04-27 00:47:25.168322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.571 cpumask for '\''job0'\'' is too big 00:27:55.571 cpumask for '\''job1'\'' is too big 00:27:55.571 cpumask for '\''job2'\'' is too big 00:27:55.571 cpumask for '\''job3'\'' is too big 00:27:55.571 Running I/O for 2 seconds... 
00:27:55.571 00:27:55.571 Latency(us) 00:27:55.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.03 14349.05 14.01 0.00 0.00 17826.51 3008.70 25618.62 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14338.19 14.00 0.00 0.00 17826.11 3634.27 25618.62 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.04 14327.82 13.99 0.00 0.00 17788.97 2934.23 22520.55 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14316.13 13.98 0.00 0.00 17791.73 3574.69 22520.55 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.04 14305.25 13.97 0.00 0.00 17756.40 3098.07 21924.77 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14294.94 13.96 0.00 0.00 17757.80 3500.22 21805.61 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.04 14284.54 13.95 0.00 0.00 17721.26 2993.80 21686.46 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14273.36 13.94 0.00 0.00 17722.03 3574.69 21924.77 00:27:55.571 =================================================================================================================== 00:27:55.571 Total : 114489.28 111.81 0.00 0.00 17773.85 2934.23 25618.62' 00:27:55.571 00:47:28 -- bdevperf/common.sh@32 -- # echo '[2024-04-27 00:47:24.775449] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:27:55.571 [2024-04-27 00:47:24.775694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140529 ] 00:27:55.571 Using job config with 4 jobs 00:27:55.571 [2024-04-27 00:47:24.945464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.571 [2024-04-27 00:47:25.168322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.571 cpumask for '\''job0'\'' is too big 00:27:55.571 cpumask for '\''job1'\'' is too big 00:27:55.571 cpumask for '\''job2'\'' is too big 00:27:55.571 cpumask for '\''job3'\'' is too big 00:27:55.571 Running I/O for 2 seconds... 
00:27:55.571 00:27:55.571 Latency(us) 00:27:55.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.03 14349.05 14.01 0.00 0.00 17826.51 3008.70 25618.62 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14338.19 14.00 0.00 0.00 17826.11 3634.27 25618.62 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.04 14327.82 13.99 0.00 0.00 17788.97 2934.23 22520.55 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14316.13 13.98 0.00 0.00 17791.73 3574.69 22520.55 00:27:55.571 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc0 : 2.04 14305.25 13.97 0.00 0.00 17756.40 3098.07 21924.77 00:27:55.571 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.571 Malloc1 : 2.04 14294.94 13.96 0.00 0.00 17757.80 3500.22 21805.61 00:27:55.572 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.572 Malloc0 : 2.04 14284.54 13.95 0.00 0.00 17721.26 2993.80 21686.46 00:27:55.572 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:27:55.572 Malloc1 : 2.04 14273.36 13.94 0.00 0.00 17722.03 3574.69 21924.77 00:27:55.572 =================================================================================================================== 00:27:55.572 Total : 114489.28 111.81 0.00 0.00 17773.85 2934.23 25618.62' 00:27:55.572 00:47:28 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:27:55.572 00:47:28 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:27:55.572 00:47:28 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:27:55.572 00:47:28 -- bdevperf/test_config.sh@44 -- # cleanup 00:27:55.572 00:47:28 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:27:55.572 00:47:28 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:55.572 ************************************ 00:27:55.572 END TEST bdevperf_config 00:27:55.572 ************************************ 00:27:55.572 00:27:55.572 real 0m16.693s 00:27:55.572 user 0m14.980s 00:27:55.572 sys 0m1.133s 00:27:55.572 00:47:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:55.572 00:47:28 -- common/autotest_common.sh@10 -- # set +x 00:27:55.572 00:47:28 -- spdk/autotest.sh@188 -- # uname -s 00:27:55.572 00:47:28 -- spdk/autotest.sh@188 -- # [[ Linux == Linux ]] 00:27:55.572 00:47:28 -- spdk/autotest.sh@189 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:55.572 00:47:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:55.572 00:47:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:55.572 00:47:28 -- common/autotest_common.sh@10 -- # set +x 00:27:55.572 ************************************ 00:27:55.572 START TEST reactor_set_interrupt 00:27:55.572 ************************************ 00:27:55.572 00:47:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:55.572 * Looking for test storage... 
00:27:55.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:55.572 00:47:29 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:27:55.572 00:47:29 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:27:55.572 00:47:29 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:55.572 00:47:29 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:55.572 00:47:29 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:27:55.572 00:47:29 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:55.572 00:47:29 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:27:55.572 00:47:29 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:27:55.572 00:47:29 -- common/autotest_common.sh@34 -- # set -e 00:27:55.572 00:47:29 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:27:55.572 00:47:29 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:27:55.572 00:47:29 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:27:55.572 00:47:29 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:27:55.572 00:47:29 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:27:55.572 00:47:29 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:27:55.572 00:47:29 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:27:55.572 00:47:29 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:27:55.572 00:47:29 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:27:55.572 00:47:29 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:27:55.572 00:47:29 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:27:55.572 00:47:29 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:27:55.572 00:47:29 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:27:55.572 00:47:29 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:27:55.572 00:47:29 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:27:55.572 00:47:29 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:27:55.572 00:47:29 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:27:55.572 00:47:29 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:27:55.572 00:47:29 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:27:55.572 00:47:29 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:27:55.572 00:47:29 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:27:55.572 00:47:29 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:27:55.572 00:47:29 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:27:55.572 00:47:29 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:55.572 00:47:29 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:27:55.572 00:47:29 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:27:55.572 00:47:29 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:27:55.572 00:47:29 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:27:55.572 00:47:29 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:27:55.572 00:47:29 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:27:55.572 00:47:29 -- common/build_config.sh@26 -- 
# CONFIG_HAVE_ARC4RANDOM=n 00:27:55.572 00:47:29 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:27:55.572 00:47:29 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:27:55.572 00:47:29 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:27:55.572 00:47:29 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:27:55.572 00:47:29 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:27:55.572 00:47:29 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:27:55.572 00:47:29 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:27:55.572 00:47:29 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:27:55.572 00:47:29 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:27:55.572 00:47:29 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:27:55.572 00:47:29 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:27:55.572 00:47:29 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:27:55.572 00:47:29 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:27:55.572 00:47:29 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:27:55.572 00:47:29 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:27:55.572 00:47:29 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:27:55.572 00:47:29 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:27:55.572 00:47:29 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:27:55.572 00:47:29 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:27:55.572 00:47:29 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:27:55.572 00:47:29 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:27:55.572 00:47:29 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:27:55.572 00:47:29 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:27:55.572 00:47:29 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:27:55.572 00:47:29 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:27:55.572 00:47:29 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:27:55.572 00:47:29 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:27:55.572 00:47:29 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:27:55.572 00:47:29 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:27:55.572 00:47:29 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:27:55.572 00:47:29 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:27:55.572 00:47:29 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:27:55.572 00:47:29 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:27:55.572 00:47:29 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:27:55.572 00:47:29 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:27:55.572 00:47:29 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:27:55.572 00:47:29 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:27:55.572 00:47:29 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:27:55.572 00:47:29 -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:27:55.572 00:47:29 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=y 00:27:55.572 00:47:29 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:27:55.572 00:47:29 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:27:55.572 00:47:29 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:27:55.572 00:47:29 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:27:55.572 00:47:29 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:27:55.572 00:47:29 -- common/build_config.sh@72 -- # CONFIG_RAID5F=y 00:27:55.572 00:47:29 -- common/build_config.sh@73 -- # 
CONFIG_EXAMPLES=y 00:27:55.572 00:47:29 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:27:55.572 00:47:29 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:27:55.572 00:47:29 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:27:55.572 00:47:29 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:27:55.572 00:47:29 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:27:55.572 00:47:29 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:27:55.572 00:47:29 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:27:55.572 00:47:29 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:27:55.572 00:47:29 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:27:55.572 00:47:29 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:55.572 00:47:29 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:27:55.572 00:47:29 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:27:55.572 00:47:29 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:27:55.572 00:47:29 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:27:55.572 00:47:29 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:27:55.572 00:47:29 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:27:55.572 00:47:29 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:27:55.572 00:47:29 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:27:55.572 00:47:29 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:27:55.572 00:47:29 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:27:55.572 00:47:29 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:27:55.572 00:47:29 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:27:55.572 00:47:29 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:27:55.572 00:47:29 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:27:55.572 00:47:29 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:27:55.572 #define SPDK_CONFIG_H 00:27:55.572 #define SPDK_CONFIG_APPS 1 00:27:55.573 #define SPDK_CONFIG_ARCH native 00:27:55.573 #define SPDK_CONFIG_ASAN 1 00:27:55.573 #undef SPDK_CONFIG_AVAHI 00:27:55.573 #undef SPDK_CONFIG_CET 00:27:55.573 #define SPDK_CONFIG_COVERAGE 1 00:27:55.573 #define SPDK_CONFIG_CROSS_PREFIX 00:27:55.573 #undef SPDK_CONFIG_CRYPTO 00:27:55.573 #undef SPDK_CONFIG_CRYPTO_MLX5 00:27:55.573 #undef SPDK_CONFIG_CUSTOMOCF 00:27:55.573 #undef SPDK_CONFIG_DAOS 00:27:55.573 #define SPDK_CONFIG_DAOS_DIR 00:27:55.573 #define SPDK_CONFIG_DEBUG 1 00:27:55.573 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:27:55.573 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:27:55.573 #define SPDK_CONFIG_DPDK_INC_DIR 00:27:55.573 #define SPDK_CONFIG_DPDK_LIB_DIR 00:27:55.573 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:27:55.573 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:27:55.573 #define SPDK_CONFIG_EXAMPLES 1 00:27:55.573 #undef SPDK_CONFIG_FC 00:27:55.573 #define SPDK_CONFIG_FC_PATH 00:27:55.573 #define SPDK_CONFIG_FIO_PLUGIN 1 00:27:55.573 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:27:55.573 #undef SPDK_CONFIG_FUSE 00:27:55.573 #undef SPDK_CONFIG_FUZZER 00:27:55.573 #define 
SPDK_CONFIG_FUZZER_LIB 00:27:55.573 #undef SPDK_CONFIG_GOLANG 00:27:55.573 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:27:55.573 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:27:55.573 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:27:55.573 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:27:55.573 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:27:55.573 #undef SPDK_CONFIG_HAVE_LIBBSD 00:27:55.573 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:27:55.573 #define SPDK_CONFIG_IDXD 1 00:27:55.573 #undef SPDK_CONFIG_IDXD_KERNEL 00:27:55.573 #undef SPDK_CONFIG_IPSEC_MB 00:27:55.573 #define SPDK_CONFIG_IPSEC_MB_DIR 00:27:55.573 #define SPDK_CONFIG_ISAL 1 00:27:55.573 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:27:55.573 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:27:55.573 #define SPDK_CONFIG_LIBDIR 00:27:55.573 #undef SPDK_CONFIG_LTO 00:27:55.573 #define SPDK_CONFIG_MAX_LCORES 00:27:55.573 #define SPDK_CONFIG_NVME_CUSE 1 00:27:55.573 #undef SPDK_CONFIG_OCF 00:27:55.573 #define SPDK_CONFIG_OCF_PATH 00:27:55.573 #define SPDK_CONFIG_OPENSSL_PATH 00:27:55.573 #undef SPDK_CONFIG_PGO_CAPTURE 00:27:55.573 #define SPDK_CONFIG_PGO_DIR 00:27:55.573 #undef SPDK_CONFIG_PGO_USE 00:27:55.573 #define SPDK_CONFIG_PREFIX /usr/local 00:27:55.573 #define SPDK_CONFIG_RAID5F 1 00:27:55.573 #undef SPDK_CONFIG_RBD 00:27:55.573 #define SPDK_CONFIG_RDMA 1 00:27:55.573 #define SPDK_CONFIG_RDMA_PROV verbs 00:27:55.573 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:27:55.573 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:27:55.573 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:27:55.573 #undef SPDK_CONFIG_SHARED 00:27:55.573 #undef SPDK_CONFIG_SMA 00:27:55.573 #define SPDK_CONFIG_TESTS 1 00:27:55.573 #undef SPDK_CONFIG_TSAN 00:27:55.573 #undef SPDK_CONFIG_UBLK 00:27:55.573 #define SPDK_CONFIG_UBSAN 1 00:27:55.573 #define SPDK_CONFIG_UNIT_TESTS 1 00:27:55.573 #undef SPDK_CONFIG_URING 00:27:55.573 #define SPDK_CONFIG_URING_PATH 00:27:55.573 #undef SPDK_CONFIG_URING_ZNS 00:27:55.573 #undef SPDK_CONFIG_USDT 00:27:55.573 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:27:55.573 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:27:55.573 #undef SPDK_CONFIG_VFIO_USER 00:27:55.573 #define SPDK_CONFIG_VFIO_USER_DIR 00:27:55.573 #define SPDK_CONFIG_VHOST 1 00:27:55.573 #define SPDK_CONFIG_VIRTIO 1 00:27:55.573 #undef SPDK_CONFIG_VTUNE 00:27:55.573 #define SPDK_CONFIG_VTUNE_DIR 00:27:55.573 #define SPDK_CONFIG_WERROR 1 00:27:55.573 #define SPDK_CONFIG_WPDK_DIR 00:27:55.573 #undef SPDK_CONFIG_XNVME 00:27:55.573 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:27:55.573 00:47:29 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:27:55.573 00:47:29 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:55.573 00:47:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.573 00:47:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.573 00:47:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.573 00:47:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:55.573 00:47:29 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:55.573 00:47:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:55.573 00:47:29 -- paths/export.sh@5 -- # export PATH 00:27:55.573 00:47:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:55.573 00:47:29 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:55.573 00:47:29 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:27:55.573 00:47:29 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:55.573 00:47:29 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:27:55.573 00:47:29 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:27:55.573 00:47:29 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:27:55.573 00:47:29 -- pm/common@67 -- # TEST_TAG=N/A 00:27:55.573 00:47:29 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:27:55.573 00:47:29 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:27:55.573 00:47:29 -- pm/common@71 -- # uname -s 00:27:55.573 00:47:29 -- pm/common@71 -- # PM_OS=Linux 00:27:55.573 00:47:29 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:27:55.573 00:47:29 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:27:55.573 00:47:29 -- pm/common@76 -- # [[ Linux == Linux ]] 00:27:55.573 00:47:29 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:27:55.573 00:47:29 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:27:55.573 00:47:29 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:27:55.573 00:47:29 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:27:55.573 00:47:29 -- common/autotest_common.sh@57 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:27:55.573 00:47:29 -- common/autotest_common.sh@61 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:27:55.573 00:47:29 -- common/autotest_common.sh@63 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:27:55.573 00:47:29 -- common/autotest_common.sh@65 -- # : 1 00:27:55.573 00:47:29 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:27:55.573 00:47:29 -- common/autotest_common.sh@67 -- # : 1 00:27:55.573 00:47:29 -- common/autotest_common.sh@68 -- # export 
SPDK_TEST_UNITTEST 00:27:55.573 00:47:29 -- common/autotest_common.sh@69 -- # : 00:27:55.573 00:47:29 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:27:55.573 00:47:29 -- common/autotest_common.sh@71 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:27:55.573 00:47:29 -- common/autotest_common.sh@73 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:27:55.573 00:47:29 -- common/autotest_common.sh@75 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:27:55.573 00:47:29 -- common/autotest_common.sh@77 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:27:55.573 00:47:29 -- common/autotest_common.sh@79 -- # : 1 00:27:55.573 00:47:29 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:27:55.573 00:47:29 -- common/autotest_common.sh@81 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:27:55.573 00:47:29 -- common/autotest_common.sh@83 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:27:55.573 00:47:29 -- common/autotest_common.sh@85 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:27:55.573 00:47:29 -- common/autotest_common.sh@87 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:27:55.573 00:47:29 -- common/autotest_common.sh@89 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:27:55.573 00:47:29 -- common/autotest_common.sh@91 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:27:55.573 00:47:29 -- common/autotest_common.sh@93 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:27:55.573 00:47:29 -- common/autotest_common.sh@95 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:27:55.573 00:47:29 -- common/autotest_common.sh@97 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:27:55.573 00:47:29 -- common/autotest_common.sh@99 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:27:55.573 00:47:29 -- common/autotest_common.sh@101 -- # : rdma 00:27:55.573 00:47:29 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:27:55.573 00:47:29 -- common/autotest_common.sh@103 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:27:55.573 00:47:29 -- common/autotest_common.sh@105 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:27:55.573 00:47:29 -- common/autotest_common.sh@107 -- # : 1 00:27:55.573 00:47:29 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:27:55.573 00:47:29 -- common/autotest_common.sh@109 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:27:55.573 00:47:29 -- common/autotest_common.sh@111 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:27:55.573 00:47:29 -- common/autotest_common.sh@113 -- # : 0 00:27:55.573 00:47:29 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:27:55.574 00:47:29 -- common/autotest_common.sh@115 -- # : 0 00:27:55.574 00:47:29 -- 
common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:27:55.574 00:47:29 -- common/autotest_common.sh@117 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:27:55.574 00:47:29 -- common/autotest_common.sh@119 -- # : 1 00:27:55.574 00:47:29 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:27:55.574 00:47:29 -- common/autotest_common.sh@121 -- # : 1 00:27:55.574 00:47:29 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:27:55.574 00:47:29 -- common/autotest_common.sh@123 -- # : 00:27:55.574 00:47:29 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:27:55.574 00:47:29 -- common/autotest_common.sh@125 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:27:55.574 00:47:29 -- common/autotest_common.sh@127 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:27:55.574 00:47:29 -- common/autotest_common.sh@129 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:27:55.574 00:47:29 -- common/autotest_common.sh@131 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:27:55.574 00:47:29 -- common/autotest_common.sh@133 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:27:55.574 00:47:29 -- common/autotest_common.sh@135 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:27:55.574 00:47:29 -- common/autotest_common.sh@137 -- # : 00:27:55.574 00:47:29 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:27:55.574 00:47:29 -- common/autotest_common.sh@139 -- # : true 00:27:55.574 00:47:29 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:27:55.574 00:47:29 -- common/autotest_common.sh@141 -- # : 1 00:27:55.574 00:47:29 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:27:55.574 00:47:29 -- common/autotest_common.sh@143 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:27:55.574 00:47:29 -- common/autotest_common.sh@145 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:27:55.574 00:47:29 -- common/autotest_common.sh@147 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:27:55.574 00:47:29 -- common/autotest_common.sh@149 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:27:55.574 00:47:29 -- common/autotest_common.sh@151 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:27:55.574 00:47:29 -- common/autotest_common.sh@153 -- # : 00:27:55.574 00:47:29 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:27:55.574 00:47:29 -- common/autotest_common.sh@155 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:27:55.574 00:47:29 -- common/autotest_common.sh@157 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:27:55.574 00:47:29 -- common/autotest_common.sh@159 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:27:55.574 00:47:29 -- common/autotest_common.sh@161 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:27:55.574 00:47:29 -- common/autotest_common.sh@163 -- # : 0 00:27:55.574 00:47:29 
-- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:27:55.574 00:47:29 -- common/autotest_common.sh@166 -- # : 00:27:55.574 00:47:29 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:27:55.574 00:47:29 -- common/autotest_common.sh@168 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:27:55.574 00:47:29 -- common/autotest_common.sh@170 -- # : 0 00:27:55.574 00:47:29 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:27:55.574 00:47:29 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:55.574 00:47:29 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:27:55.574 00:47:29 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:55.574 00:47:29 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:27:55.574 00:47:29 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:55.574 00:47:29 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:55.574 00:47:29 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:55.574 00:47:29 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:27:55.574 00:47:29 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:27:55.574 00:47:29 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:27:55.574 00:47:29 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:55.574 00:47:29 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:27:55.574 00:47:29 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:27:55.574 00:47:29 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:27:55.574 00:47:29 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:55.574 00:47:29 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:27:55.574 00:47:29 
-- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:55.574 00:47:29 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:27:55.574 00:47:29 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:27:55.574 00:47:29 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:27:55.574 00:47:29 -- common/autotest_common.sh@199 -- # cat 00:27:55.574 00:47:29 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:27:55.574 00:47:29 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:55.574 00:47:29 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:27:55.574 00:47:29 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:55.574 00:47:29 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:27:55.574 00:47:29 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:27:55.574 00:47:29 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:27:55.574 00:47:29 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:55.574 00:47:29 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:27:55.574 00:47:29 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:55.574 00:47:29 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:27:55.574 00:47:29 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:27:55.574 00:47:29 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:27:55.574 00:47:29 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:55.574 00:47:29 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:27:55.574 00:47:29 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:55.574 00:47:29 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:27:55.574 00:47:29 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:55.574 00:47:29 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:55.574 00:47:29 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:27:55.574 00:47:29 -- common/autotest_common.sh@252 -- # export valgrind= 00:27:55.574 00:47:29 -- common/autotest_common.sh@252 -- # valgrind= 00:27:55.574 00:47:29 -- common/autotest_common.sh@258 -- # uname -s 00:27:55.574 00:47:29 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:27:55.574 00:47:29 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:27:55.574 00:47:29 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:27:55.574 00:47:29 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:27:55.574 00:47:29 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:27:55.574 00:47:29 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:27:55.574 00:47:29 -- common/autotest_common.sh@268 -- # MAKE=make 00:27:55.574 00:47:29 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:27:55.574 00:47:29 -- common/autotest_common.sh@285 -- # 
export HUGEMEM=4096 00:27:55.574 00:47:29 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:27:55.574 00:47:29 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:27:55.574 00:47:29 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:27:55.574 00:47:29 -- common/autotest_common.sh@307 -- # [[ -z 140630 ]] 00:27:55.574 00:47:29 -- common/autotest_common.sh@307 -- # kill -0 140630 00:27:55.574 00:47:29 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:27:55.574 00:47:29 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:27:55.574 00:47:29 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:27:55.574 00:47:29 -- common/autotest_common.sh@320 -- # local mount target_dir 00:27:55.574 00:47:29 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:27:55.574 00:47:29 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:27:55.574 00:47:29 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:27:55.574 00:47:29 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:27:55.574 00:47:29 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.DNHK1n 00:27:55.574 00:47:29 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:27:55.574 00:47:29 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:27:55.574 00:47:29 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:27:55.574 00:47:29 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.DNHK1n/tests/interrupt /tmp/spdk.DNHK1n 00:27:55.574 00:47:29 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:27:55.575 00:47:29 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:27:55.575 00:47:29 -- common/autotest_common.sh@316 -- # df -T 00:27:55.575 00:47:29 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:27:55.833 00:47:29 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:27:55.833 00:47:29 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:27:55.833 00:47:29 -- common/autotest_common.sh@351 -- # avails["$mount"]=1248956416 00:27:55.833 00:47:29 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253683200 00:27:55.834 00:47:29 -- common/autotest_common.sh@352 -- # uses["$mount"]=4726784 00:27:55.834 00:47:29 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # avails["$mount"]=10271670272 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:27:55.834 00:47:29 -- common/autotest_common.sh@352 -- # uses["$mount"]=10328346624 00:27:55.834 00:47:29 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # avails["$mount"]=6265786368 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6268399616 00:27:55.834 00:47:29 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:27:55.834 00:47:29 -- common/autotest_common.sh@349 -- # read -r source fs size use 
avail _ mount 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:27:55.834 00:47:29 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:27:55.834 00:47:29 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # avails["$mount"]=103061504 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109395968 00:27:55.834 00:47:29 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:27:55.834 00:47:29 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253675008 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253679104 00:27:55.834 00:47:29 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:27:55.834 00:47:29 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:27:55.834 00:47:29 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # avails["$mount"]=97097703424 00:27:55.834 00:47:29 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:27:55.834 00:47:29 -- common/autotest_common.sh@352 -- # uses["$mount"]=2605076480 00:27:55.834 00:47:29 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:27:55.834 00:47:29 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:27:55.834 * Looking for test storage... 
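(Annotation: the storage walk that resumes below condenses to the following logic. This is a paraphrased sketch of the set_test_storage steps visible in the trace, not the verbatim helper — retry and error handling are omitted, and avails[] is assumed to hold free bytes per mount as built from the 'df -T' pass above. The 2214592512 figure is the 2 GiB the test requested plus the 64 MiB margin the trace shows.)

    # Try the test dir first, then a throwaway /tmp location from mktemp,
    # and keep the first candidate whose filesystem has enough free space.
    requested_size=2214592512
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    for target_dir in "${storage_candidates[@]}"; do
        # Resolve which mount point backs this candidate (same awk filter as the trace)
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}              # free bytes on that mount
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir    # lands on .../spdk/test/interrupt in this run
            break
        fi
    done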
00:27:55.834 00:47:29 -- common/autotest_common.sh@357 -- # local target_space new_size 00:27:55.834 00:47:29 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:27:55.834 00:47:29 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:27:55.834 00:47:29 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:55.834 00:47:29 -- common/autotest_common.sh@361 -- # mount=/ 00:27:55.834 00:47:29 -- common/autotest_common.sh@363 -- # target_space=10271670272 00:27:55.834 00:47:29 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:27:55.834 00:47:29 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:27:55.834 00:47:29 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:27:55.834 00:47:29 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:27:55.834 00:47:29 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:27:55.834 00:47:29 -- common/autotest_common.sh@370 -- # new_size=12542939136 00:27:55.834 00:47:29 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:27:55.834 00:47:29 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:55.834 00:47:29 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:27:55.834 00:47:29 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:55.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:27:55.834 00:47:29 -- common/autotest_common.sh@378 -- # return 0 00:27:55.834 00:47:29 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:27:55.834 00:47:29 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:27:55.834 00:47:29 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:27:55.834 00:47:29 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:27:55.834 00:47:29 -- common/autotest_common.sh@1673 -- # true 00:27:55.834 00:47:29 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:27:55.834 00:47:29 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:27:55.834 00:47:29 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:27:55.834 00:47:29 -- common/autotest_common.sh@27 -- # exec 00:27:55.834 00:47:29 -- common/autotest_common.sh@29 -- # exec 00:27:55.834 00:47:29 -- common/autotest_common.sh@31 -- # xtrace_restore 00:27:55.834 00:47:29 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:27:55.834 00:47:29 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:27:55.834 00:47:29 -- common/autotest_common.sh@18 -- # set -x 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:27:55.834 00:47:29 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:55.834 00:47:29 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:27:55.834 00:47:29 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=140674 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 140674 /var/tmp/spdk.sock 00:27:55.834 00:47:29 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:27:55.834 00:47:29 -- common/autotest_common.sh@817 -- # '[' -z 140674 ']' 00:27:55.834 00:47:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.834 00:47:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:55.834 00:47:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.834 00:47:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:55.834 00:47:29 -- common/autotest_common.sh@10 -- # set +x 00:27:55.834 [2024-04-27 00:47:29.234123] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
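(Annotation: the target coming up here was launched by start_intr_tgt; condensed from the interrupt_common.sh trace above. waitforlisten, killprocess and cleanup are SPDK test helpers shown in the trace, assumed sourced; the flags are exactly those logged.)

    rpc_addr=/var/tmp/spdk.sock
    cpu_mask=0x07                                   # reactors on cores 0-2, matching the EAL notices
    # -E registers the interrupt RPC plugin, -g delays subsystem init until RPC
    "$rootdir"/build/examples/interrupt_tgt -m "$cpu_mask" -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!
    trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$intr_tgt_pid" "$rpc_addr"       # polls until the UNIX socket accepts RPCs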
00:27:55.834 [2024-04-27 00:47:29.234339] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140674 ] 00:27:55.834 [2024-04-27 00:47:29.413402] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.093 [2024-04-27 00:47:29.616790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.093 [2024-04-27 00:47:29.616933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.093 [2024-04-27 00:47:29.616929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.352 [2024-04-27 00:47:29.877377] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:56.639 00:47:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:56.639 00:47:30 -- common/autotest_common.sh@850 -- # return 0 00:27:56.639 00:47:30 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:27:56.639 00:47:30 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:57.210 Malloc0 00:27:57.210 Malloc1 00:27:57.210 Malloc2 00:27:57.210 00:47:30 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:27:57.210 00:47:30 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:27:57.210 00:47:30 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:27:57.210 00:47:30 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:27:57.210 5000+0 records in 00:27:57.210 5000+0 records out 00:27:57.210 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0230274 s, 445 MB/s 00:27:57.210 00:47:30 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:27:57.468 AIO0 00:27:57.468 00:47:30 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 140674 00:27:57.468 00:47:30 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 140674 without_thd 00:27:57.468 00:47:30 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=140674 00:27:57.468 00:47:30 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:27:57.468 00:47:30 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:27:57.468 00:47:30 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:27:57.468 00:47:30 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:27:57.468 00:47:30 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:57.468 00:47:30 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:27:57.468 00:47:30 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:57.468 00:47:30 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:57.468 00:47:30 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:57.726 00:47:31 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:27:57.726 00:47:31 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:27:57.726 00:47:31 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:27:57.726 00:47:31 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:27:57.726 00:47:31 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:27:57.726 00:47:31 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:27:57.726 00:47:31 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:57.726 00:47:31 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:27:57.726 00:47:31 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:27:57.983 00:47:31 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:27:57.983 00:47:31 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:27:57.983 spdk_thread ids are 1 on reactor0. 00:27:57.983 00:47:31 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:57.983 00:47:31 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 140674 0 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140674 0 idle 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@33 -- # local pid=140674 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140674 -w 256 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140674 root 20 0 20.1t 146564 29168 S 6.7 1.2 0:00.73 reactor_0' 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@48 -- # echo 140674 root 20 0 20.1t 146564 29168 S 6.7 1.2 0:00.73 reactor_0 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:57.983 00:47:31 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:57.983 00:47:31 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 140674 1 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140674 1 idle 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@33 -- # local pid=140674 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:57.983 00:47:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:57.984 
00:47:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:57.984 00:47:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:57.984 00:47:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:57.984 00:47:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:57.984 00:47:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140674 -w 256 00:27:57.984 00:47:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140687 root 20 0 20.1t 146564 29168 S 0.0 1.2 0:00.00 reactor_1' 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@48 -- # echo 140687 root 20 0 20.1t 146564 29168 S 0.0 1.2 0:00.00 reactor_1 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:58.241 00:47:31 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:27:58.241 00:47:31 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 140674 2 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140674 2 idle 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@33 -- # local pid=140674 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140674 -w 256 00:27:58.241 00:47:31 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140688 root 20 0 20.1t 146564 29168 S 0.0 1.2 0:00.00 reactor_2' 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@48 -- # echo 140688 root 20 0 20.1t 146564 29168 S 0.0 1.2 0:00.00 reactor_2 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:27:58.499 00:47:31 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:58.499 00:47:31 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:27:58.499 00:47:31 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
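(Annotation: the idle checks just traced for reactors 0-2 all come down to sampling top once and reading the %CPU column; a condensed sketch of reactor_is_busy_or_idle, with the trace's thresholds. The real helper wraps this in the retry countdown visible as (( j = 10 )). The trace resumes below with the interrupt-mode switch.)

    # One batch iteration of top, threads view, filtered to this reactor's row
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}                         # 6.7 -> 6, 99.9 -> 99
    if [[ $state == busy ]]; then
        [[ $cpu_rate -lt 70 ]] && return 1          # a polling reactor should sit near 100%
    else
        [[ $cpu_rate -gt 30 ]] && return 1          # an interrupt-mode reactor should sit near 0%
    fi
    return 0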
00:27:58.499 00:47:31 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:27:58.757 [2024-04-27 00:47:32.124064] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:58.757 00:47:32 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:27:58.757 [2024-04-27 00:47:32.323820] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:27:58.757 [2024-04-27 00:47:32.324613] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:58.757 00:47:32 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:27:59.015 [2024-04-27 00:47:32.591677] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:27:59.015 [2024-04-27 00:47:32.592315] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:59.273 00:47:32 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:59.273 00:47:32 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 140674 0 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 140674 0 busy 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@33 -- # local pid=140674 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140674 -w 256 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140674 root 20 0 20.1t 146676 29168 R 99.9 1.2 0:01.17 reactor_0' 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@48 -- # echo 140674 root 20 0 20.1t 146676 29168 R 99.9 1.2 0:01.17 reactor_0 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:59.273 00:47:32 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:27:59.273 00:47:32 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 140674 2 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 140674 2 busy 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@33 -- # local pid=140674 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:59.273 
00:47:32 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140674 -w 256 00:27:59.273 00:47:32 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140688 root 20 0 20.1t 146676 29168 R 93.3 1.2 0:00.33 reactor_2' 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@48 -- # echo 140688 root 20 0 20.1t 146676 29168 R 93.3 1.2 0:00.33 reactor_2 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.3 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:27:59.531 00:47:32 -- interrupt/interrupt_common.sh@56 -- # return 0 00:27:59.531 00:47:32 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:27:59.789 [2024-04-27 00:47:33.195714] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:27:59.789 [2024-04-27 00:47:33.196292] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:27:59.789 00:47:33 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:27:59.789 00:47:33 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 140674 2 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140674 2 idle 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@33 -- # local pid=140674 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140674 -w 256 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140688 root 20 0 20.1t 146744 29168 S 0.0 1.2 0:00.59 reactor_2' 00:27:59.790 00:47:33 -- interrupt/interrupt_common.sh@48 -- # echo 140688 root 20 0 20.1t 146744 29168 S 0.0 1.2 0:00.59 reactor_2 00:28:00.048 00:47:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:00.048 00:47:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:00.048 00:47:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:00.048 00:47:33 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:00.048 00:47:33 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:00.048 00:47:33 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:00.048 00:47:33 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:00.048 00:47:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:00.048 00:47:33 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:28:00.048 [2024-04-27 00:47:33.575713] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:28:00.048 [2024-04-27 00:47:33.576329] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:00.048 00:47:33 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:28:00.048 00:47:33 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:28:00.048 00:47:33 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:28:00.306 [2024-04-27 00:47:33.780032] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:00.306 00:47:33 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 140674 0 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140674 0 idle 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@33 -- # local pid=140674 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:00.306 00:47:33 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140674 -w 256 00:28:00.576 00:47:33 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140674 root 20 0 20.1t 146836 29168 S 0.0 1.2 0:01.99 reactor_0' 00:28:00.576 00:47:33 -- interrupt/interrupt_common.sh@48 -- # echo 140674 root 20 0 20.1t 146836 29168 S 0.0 1.2 0:01.99 reactor_0 00:28:00.576 00:47:33 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:00.576 00:47:33 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:00.576 00:47:33 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:00.576 00:47:33 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:00.576 00:47:33 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:00.577 00:47:33 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:00.577 00:47:33 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:00.577 00:47:33 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:00.577 00:47:33 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:00.577 00:47:33 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:28:00.577 00:47:33 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:28:00.577 00:47:33 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 140674 
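(Annotation: with both reactors verified idle again, the first (without-threads) pass tears down below. In summary, the RPC choreography it just traced was the following; every command and flag is copied from the trace, with 140674 being this run's target PID.)

    rpc_py=$rootdir/scripts/rpc.py
    $rpc_py thread_set_cpumask -i 1 -m 0x2                              # move app_thread off reactor 0
    $rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
    $rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # reactor 2 -> poll mode
    reactor_is_busy 140674 0 && reactor_is_busy 140674 2                # expect ~100% CPU on both
    $rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2      # back to interrupt mode
    $rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0
    reactor_is_idle 140674 2 && reactor_is_idle 140674 0                # expect ~0% CPU on both
    $rpc_py thread_set_cpumask -i 1 -m 0x1                              # restore app_thread to reactor 0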
00:28:00.577 00:47:33 -- common/autotest_common.sh@936 -- # '[' -z 140674 ']' 00:28:00.577 00:47:33 -- common/autotest_common.sh@940 -- # kill -0 140674 00:28:00.577 00:47:33 -- common/autotest_common.sh@941 -- # uname 00:28:00.577 00:47:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:00.577 00:47:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140674 00:28:00.577 00:47:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:00.577 killing process with pid 140674 00:28:00.577 00:47:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:00.577 00:47:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140674' 00:28:00.577 00:47:33 -- common/autotest_common.sh@955 -- # kill 140674 00:28:00.577 00:47:33 -- common/autotest_common.sh@960 -- # wait 140674 00:28:01.964 00:47:35 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:28:01.964 00:47:35 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:28:01.964 00:47:35 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:28:01.964 00:47:35 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.964 00:47:35 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:28:01.964 00:47:35 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=140821 00:28:01.964 00:47:35 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:01.964 00:47:35 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 140821 /var/tmp/spdk.sock 00:28:01.964 00:47:35 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:01.964 00:47:35 -- common/autotest_common.sh@817 -- # '[' -z 140821 ']' 00:28:01.964 00:47:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.964 00:47:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:01.964 00:47:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.964 00:47:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:01.964 00:47:35 -- common/autotest_common.sh@10 -- # set +x 00:28:01.964 [2024-04-27 00:47:35.248880] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:01.964 [2024-04-27 00:47:35.249082] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140821 ] 00:28:01.964 [2024-04-27 00:47:35.425624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:02.224 [2024-04-27 00:47:35.624631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.224 [2024-04-27 00:47:35.624861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.224 [2024-04-27 00:47:35.624853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.481 [2024-04-27 00:47:35.882580] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
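
(Aside: at this point the harness has killed the first target, pid 140674, and relaunched build/examples/interrupt_tgt with -m 0x07 -r /var/tmp/spdk.sock -E -g, then parks in waitforlisten until the RPC socket answers. The sketch below shows the general shape of such a wait loop under default paths; wait_for_rpc_sock is an illustrative name, not the real autotest_common.sh helper, whose internals differ.)

# Illustrative only: poll until an SPDK app answers on its RPC socket.
wait_for_rpc_sock() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        # Give up early if the target already died.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the app's RPC server is up.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}
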
00:28:02.740 00:47:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:02.740 00:47:36 -- common/autotest_common.sh@850 -- # return 0 00:28:02.740 00:47:36 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:28:02.740 00:47:36 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:02.997 Malloc0 00:28:02.997 Malloc1 00:28:02.997 Malloc2 00:28:02.997 00:47:36 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:28:02.997 00:47:36 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:28:02.997 00:47:36 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:02.997 00:47:36 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:02.997 5000+0 records in 00:28:02.997 5000+0 records out 00:28:02.997 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0274084 s, 374 MB/s 00:28:02.997 00:47:36 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:28:03.255 AIO0 00:28:03.255 00:47:36 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 140821 00:28:03.255 00:47:36 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 140821 00:28:03.255 00:47:36 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=140821 00:28:03.255 00:47:36 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:28:03.255 00:47:36 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:28:03.255 00:47:36 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:28:03.255 00:47:36 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:28:03.255 00:47:36 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:28:03.255 00:47:36 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:28:03.255 00:47:36 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:03.255 00:47:36 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:28:03.255 00:47:36 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:03.512 00:47:36 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:28:03.512 00:47:36 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:28:03.512 00:47:37 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:28:03.512 00:47:37 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:28:03.512 00:47:37 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:28:03.512 00:47:37 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:28:03.512 00:47:37 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:03.512 00:47:37 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:28:03.512 00:47:37 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:28:03.771 00:47:37 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:28:03.771 00:47:37 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on 
reactor0.' 00:28:03.771 spdk_thread ids are 1 on reactor0. 00:28:03.771 00:47:37 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:03.771 00:47:37 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 140821 0 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140821 0 idle 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@33 -- # local pid=140821 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140821 -w 256 00:28:03.771 00:47:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140821 root 20 0 20.1t 146980 29548 S 0.0 1.2 0:00.70 reactor_0' 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@48 -- # echo 140821 root 20 0 20.1t 146980 29548 S 0.0 1.2 0:00.70 reactor_0 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:04.029 00:47:37 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:04.029 00:47:37 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 140821 1 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140821 1 idle 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@33 -- # local pid=140821 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140821 -w 256 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140830 root 20 0 20.1t 146980 29548 S 0.0 1.2 0:00.00 reactor_1' 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@48 -- # echo 140830 root 20 0 20.1t 146980 29548 S 0.0 1.2 0:00.00 reactor_1 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print 
$9}' 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:04.029 00:47:37 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:04.029 00:47:37 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 140821 2 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140821 2 idle 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@33 -- # local pid=140821 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:04.029 00:47:37 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140821 -w 256 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140832 root 20 0 20.1t 146980 29548 S 0.0 1.2 0:00.00 reactor_2' 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@48 -- # echo 140832 root 20 0 20.1t 146980 29548 S 0.0 1.2 0:00.00 reactor_2 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:04.287 00:47:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:04.287 00:47:37 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:28:04.287 00:47:37 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:28:04.544 [2024-04-27 00:47:37.946763] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:28:04.544 [2024-04-27 00:47:37.947140] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:28:04.544 [2024-04-27 00:47:37.947527] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:04.544 00:47:37 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:28:04.802 [2024-04-27 00:47:38.214602] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
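
(Aside: every mode switch in this trace goes through the same RPC plugin call, rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode <reactor> [-d], where -d drops the reactor back to poll mode and omitting it re-enables interrupt mode. Condensed, the sequence this test drives for reactors 0 and 2 is:)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Drop reactors 0 and 2 to poll mode and expect them to show up busy in top...
$RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
$RPC --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
# ...then switch them back to interrupt mode and expect them to go idle.
$RPC --plugin interrupt_plugin reactor_set_interrupt_mode 2
$RPC --plugin interrupt_plugin reactor_set_interrupt_mode 0
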
00:28:04.802 [2024-04-27 00:47:38.215230] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:04.802 00:47:38 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:04.802 00:47:38 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 140821 0 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 140821 0 busy 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@33 -- # local pid=140821 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140821 -w 256 00:28:04.802 00:47:38 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140821 root 20 0 20.1t 147064 29548 R 99.9 1.2 0:01.16 reactor_0' 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@48 -- # echo 140821 root 20 0 20.1t 147064 29548 R 99.9 1.2 0:01.16 reactor_0 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:05.060 00:47:38 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:05.060 00:47:38 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 140821 2 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 140821 2 busy 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@33 -- # local pid=140821 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140821 -w 256 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140832 root 20 0 20.1t 147064 29548 R 99.9 1.2 0:00.34 reactor_2' 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@48 -- # echo 140832 root 20 0 20.1t 147064 29548 R 99.9 1.2 0:00.34 reactor_2 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:28:05.060 
00:47:38 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:28:05.060 00:47:38 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:05.060 00:47:38 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:28:05.318 [2024-04-27 00:47:38.835096] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:28:05.318 [2024-04-27 00:47:38.835507] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:05.318 00:47:38 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:28:05.318 00:47:38 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 140821 2 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140821 2 idle 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@33 -- # local pid=140821 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140821 -w 256 00:28:05.318 00:47:38 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140832 root 20 0 20.1t 147124 29548 S 0.0 1.2 0:00.62 reactor_2' 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@48 -- # echo 140832 root 20 0 20.1t 147124 29548 S 0.0 1.2 0:00.62 reactor_2 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:05.577 00:47:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:05.577 00:47:39 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:28:05.835 [2024-04-27 00:47:39.219173] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:28:05.835 [2024-04-27 00:47:39.219765] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
00:28:05.835 [2024-04-27 00:47:39.219813] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:05.835 00:47:39 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:28:05.835 00:47:39 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 140821 0 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 140821 0 idle 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@33 -- # local pid=140821 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@41 -- # hash top 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 140821 -w 256 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 140821 root 20 0 20.1t 147168 29548 S 0.0 1.2 0:01.99 reactor_0' 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@48 -- # echo 140821 root 20 0 20.1t 147168 29548 S 0.0 1.2 0:01.99 reactor_0 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:28:05.835 00:47:39 -- interrupt/interrupt_common.sh@56 -- # return 0 00:28:05.835 00:47:39 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:05.835 00:47:39 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:28:05.835 00:47:39 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:05.835 00:47:39 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 140821 00:28:05.835 00:47:39 -- common/autotest_common.sh@936 -- # '[' -z 140821 ']' 00:28:05.835 00:47:39 -- common/autotest_common.sh@940 -- # kill -0 140821 00:28:05.835 00:47:39 -- common/autotest_common.sh@941 -- # uname 00:28:05.835 00:47:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:05.835 00:47:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 140821 00:28:06.093 00:47:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:06.093 00:47:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:06.093 killing process with pid 140821 00:28:06.093 00:47:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 140821' 00:28:06.093 00:47:39 -- common/autotest_common.sh@955 -- # kill 140821 00:28:06.093 00:47:39 -- common/autotest_common.sh@960 -- # wait 140821 00:28:07.472 00:47:40 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:28:07.472 00:47:40 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:28:07.472 00:28:07.472 real 0m11.659s 00:28:07.472 
user 0m12.184s 00:28:07.472 sys 0m1.571s 00:28:07.472 00:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:07.472 ************************************ 00:28:07.472 END TEST reactor_set_interrupt 00:28:07.472 ************************************ 00:28:07.472 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:28:07.472 00:47:40 -- spdk/autotest.sh@190 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:07.472 00:47:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:07.472 00:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:07.472 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:28:07.472 ************************************ 00:28:07.472 START TEST reap_unregistered_poller 00:28:07.472 ************************************ 00:28:07.472 00:47:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:07.472 * Looking for test storage... 00:28:07.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:07.472 00:47:40 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:28:07.472 00:47:40 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:28:07.472 00:47:40 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:07.472 00:47:40 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:28:07.472 00:47:40 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:28:07.472 00:47:40 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:07.472 00:47:40 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:28:07.472 00:47:40 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:28:07.472 00:47:40 -- common/autotest_common.sh@34 -- # set -e 00:28:07.472 00:47:40 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:28:07.472 00:47:40 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:28:07.472 00:47:40 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:28:07.472 00:47:40 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:28:07.472 00:47:40 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:28:07.472 00:47:40 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:28:07.472 00:47:40 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:28:07.472 00:47:40 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:28:07.472 00:47:40 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:28:07.472 00:47:40 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:28:07.472 00:47:40 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:28:07.472 00:47:40 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:28:07.472 00:47:40 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:28:07.472 00:47:40 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:28:07.472 00:47:40 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:28:07.472 00:47:40 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:28:07.472 00:47:40 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:28:07.472 00:47:40 -- common/build_config.sh@13 -- 
# CONFIG_VTUNE=n 00:28:07.472 00:47:40 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:28:07.472 00:47:40 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:28:07.472 00:47:40 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:28:07.472 00:47:40 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:28:07.472 00:47:40 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:28:07.472 00:47:40 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:07.472 00:47:40 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:28:07.472 00:47:40 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:28:07.472 00:47:40 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:28:07.472 00:47:40 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:28:07.472 00:47:40 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:28:07.472 00:47:40 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:28:07.472 00:47:40 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:28:07.472 00:47:40 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:28:07.472 00:47:40 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:28:07.472 00:47:40 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:28:07.472 00:47:40 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:28:07.472 00:47:40 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:28:07.472 00:47:40 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:28:07.472 00:47:40 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:28:07.472 00:47:40 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:28:07.472 00:47:40 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:28:07.472 00:47:40 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:28:07.472 00:47:40 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:28:07.472 00:47:40 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:28:07.472 00:47:40 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:28:07.472 00:47:40 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:28:07.472 00:47:40 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:28:07.472 00:47:40 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:28:07.472 00:47:40 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:28:07.472 00:47:40 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:28:07.472 00:47:40 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:28:07.472 00:47:40 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:28:07.472 00:47:40 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:28:07.472 00:47:40 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:28:07.472 00:47:40 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:28:07.472 00:47:40 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:28:07.472 00:47:40 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:28:07.472 00:47:40 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:28:07.472 00:47:40 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:28:07.472 00:47:40 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:28:07.472 00:47:40 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:28:07.472 00:47:40 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:28:07.472 00:47:40 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:28:07.472 00:47:40 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:28:07.472 00:47:40 -- common/build_config.sh@59 -- # 
CONFIG_GOLANG=n 00:28:07.472 00:47:40 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:28:07.472 00:47:40 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:28:07.472 00:47:40 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:28:07.472 00:47:40 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:28:07.472 00:47:40 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:28:07.472 00:47:40 -- common/build_config.sh@65 -- # CONFIG_SHARED=n 00:28:07.472 00:47:40 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=y 00:28:07.472 00:47:40 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:28:07.472 00:47:40 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:28:07.472 00:47:40 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:28:07.472 00:47:40 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:28:07.472 00:47:40 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:28:07.472 00:47:40 -- common/build_config.sh@72 -- # CONFIG_RAID5F=y 00:28:07.472 00:47:40 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:28:07.472 00:47:40 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:28:07.472 00:47:40 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:28:07.472 00:47:40 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:28:07.473 00:47:40 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:28:07.473 00:47:40 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:28:07.473 00:47:40 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:28:07.473 00:47:40 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:28:07.473 00:47:40 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:28:07.473 00:47:40 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:28:07.473 00:47:40 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:28:07.473 00:47:40 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:28:07.473 00:47:40 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:28:07.473 00:47:40 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:28:07.473 00:47:40 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:28:07.473 00:47:40 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:28:07.473 00:47:40 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:28:07.473 00:47:40 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:28:07.473 00:47:40 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:28:07.473 00:47:40 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:28:07.473 00:47:40 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:28:07.473 00:47:40 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:28:07.473 00:47:40 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:28:07.473 00:47:40 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:28:07.473 00:47:40 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:28:07.473 00:47:40 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:28:07.473 #define SPDK_CONFIG_H 00:28:07.473 #define SPDK_CONFIG_APPS 1 00:28:07.473 #define SPDK_CONFIG_ARCH native 00:28:07.473 #define SPDK_CONFIG_ASAN 1 00:28:07.473 #undef SPDK_CONFIG_AVAHI 00:28:07.473 
#undef SPDK_CONFIG_CET 00:28:07.473 #define SPDK_CONFIG_COVERAGE 1 00:28:07.473 #define SPDK_CONFIG_CROSS_PREFIX 00:28:07.473 #undef SPDK_CONFIG_CRYPTO 00:28:07.473 #undef SPDK_CONFIG_CRYPTO_MLX5 00:28:07.473 #undef SPDK_CONFIG_CUSTOMOCF 00:28:07.473 #undef SPDK_CONFIG_DAOS 00:28:07.473 #define SPDK_CONFIG_DAOS_DIR 00:28:07.473 #define SPDK_CONFIG_DEBUG 1 00:28:07.473 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:28:07.473 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:28:07.473 #define SPDK_CONFIG_DPDK_INC_DIR 00:28:07.473 #define SPDK_CONFIG_DPDK_LIB_DIR 00:28:07.473 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:28:07.473 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:07.473 #define SPDK_CONFIG_EXAMPLES 1 00:28:07.473 #undef SPDK_CONFIG_FC 00:28:07.473 #define SPDK_CONFIG_FC_PATH 00:28:07.473 #define SPDK_CONFIG_FIO_PLUGIN 1 00:28:07.473 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:28:07.473 #undef SPDK_CONFIG_FUSE 00:28:07.473 #undef SPDK_CONFIG_FUZZER 00:28:07.473 #define SPDK_CONFIG_FUZZER_LIB 00:28:07.473 #undef SPDK_CONFIG_GOLANG 00:28:07.473 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:28:07.473 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:28:07.473 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:28:07.473 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:28:07.473 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:28:07.473 #undef SPDK_CONFIG_HAVE_LIBBSD 00:28:07.473 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:28:07.473 #define SPDK_CONFIG_IDXD 1 00:28:07.473 #undef SPDK_CONFIG_IDXD_KERNEL 00:28:07.473 #undef SPDK_CONFIG_IPSEC_MB 00:28:07.473 #define SPDK_CONFIG_IPSEC_MB_DIR 00:28:07.473 #define SPDK_CONFIG_ISAL 1 00:28:07.473 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:28:07.473 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:28:07.473 #define SPDK_CONFIG_LIBDIR 00:28:07.473 #undef SPDK_CONFIG_LTO 00:28:07.473 #define SPDK_CONFIG_MAX_LCORES 00:28:07.473 #define SPDK_CONFIG_NVME_CUSE 1 00:28:07.473 #undef SPDK_CONFIG_OCF 00:28:07.473 #define SPDK_CONFIG_OCF_PATH 00:28:07.473 #define SPDK_CONFIG_OPENSSL_PATH 00:28:07.473 #undef SPDK_CONFIG_PGO_CAPTURE 00:28:07.473 #define SPDK_CONFIG_PGO_DIR 00:28:07.473 #undef SPDK_CONFIG_PGO_USE 00:28:07.473 #define SPDK_CONFIG_PREFIX /usr/local 00:28:07.473 #define SPDK_CONFIG_RAID5F 1 00:28:07.473 #undef SPDK_CONFIG_RBD 00:28:07.473 #define SPDK_CONFIG_RDMA 1 00:28:07.473 #define SPDK_CONFIG_RDMA_PROV verbs 00:28:07.473 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:28:07.473 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:28:07.473 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:28:07.473 #undef SPDK_CONFIG_SHARED 00:28:07.473 #undef SPDK_CONFIG_SMA 00:28:07.473 #define SPDK_CONFIG_TESTS 1 00:28:07.473 #undef SPDK_CONFIG_TSAN 00:28:07.473 #undef SPDK_CONFIG_UBLK 00:28:07.473 #define SPDK_CONFIG_UBSAN 1 00:28:07.473 #define SPDK_CONFIG_UNIT_TESTS 1 00:28:07.473 #undef SPDK_CONFIG_URING 00:28:07.473 #define SPDK_CONFIG_URING_PATH 00:28:07.473 #undef SPDK_CONFIG_URING_ZNS 00:28:07.473 #undef SPDK_CONFIG_USDT 00:28:07.473 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:28:07.473 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:28:07.473 #undef SPDK_CONFIG_VFIO_USER 00:28:07.473 #define SPDK_CONFIG_VFIO_USER_DIR 00:28:07.473 #define SPDK_CONFIG_VHOST 1 00:28:07.473 #define SPDK_CONFIG_VIRTIO 1 00:28:07.473 #undef SPDK_CONFIG_VTUNE 00:28:07.473 #define SPDK_CONFIG_VTUNE_DIR 00:28:07.473 #define SPDK_CONFIG_WERROR 1 00:28:07.473 #define SPDK_CONFIG_WPDK_DIR 00:28:07.473 #undef SPDK_CONFIG_XNVME 00:28:07.473 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 
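
(Aside: the large #define dump above is not noise — applications.sh slurps include/spdk/config.h into a variable and glob-matches it against *#define SPDK_CONFIG_DEBUG* to decide whether this is a debug build before honoring SPDK_AUTOTEST_DEBUG_APPS. The same check in isolation, as a sketch:)

# Sketch of the debug-build detection seen above: glob-match the generated
# config header instead of parsing it line by line.
config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build: SPDK_CONFIG_DEBUG is defined"
fi
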
00:28:07.473 00:47:40 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:28:07.473 00:47:40 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:07.473 00:47:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.473 00:47:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.473 00:47:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.473 00:47:40 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:07.473 00:47:40 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:07.473 00:47:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:07.473 00:47:40 -- paths/export.sh@5 -- # export PATH 00:28:07.473 00:47:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:07.473 00:47:40 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:28:07.473 00:47:40 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:28:07.473 00:47:40 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:28:07.473 00:47:40 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:28:07.473 00:47:40 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:28:07.473 00:47:40 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:28:07.473 00:47:40 -- pm/common@67 -- # TEST_TAG=N/A 00:28:07.473 00:47:40 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:28:07.473 00:47:40 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:28:07.473 00:47:40 -- pm/common@71 -- # uname -s 00:28:07.473 00:47:40 -- pm/common@71 -- # PM_OS=Linux 00:28:07.473 00:47:40 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:28:07.473 00:47:40 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:28:07.473 00:47:40 -- pm/common@76 -- # [[ Linux == Linux ]] 00:28:07.473 00:47:40 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:28:07.473 00:47:40 -- pm/common@83 -- # 
MONITOR_RESOURCES_PIDS=() 00:28:07.473 00:47:40 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:28:07.473 00:47:40 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:28:07.473 00:47:40 -- common/autotest_common.sh@57 -- # : 0 00:28:07.473 00:47:40 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:28:07.473 00:47:40 -- common/autotest_common.sh@61 -- # : 0 00:28:07.473 00:47:40 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:28:07.473 00:47:40 -- common/autotest_common.sh@63 -- # : 0 00:28:07.473 00:47:40 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:28:07.473 00:47:40 -- common/autotest_common.sh@65 -- # : 1 00:28:07.473 00:47:40 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:28:07.473 00:47:40 -- common/autotest_common.sh@67 -- # : 1 00:28:07.473 00:47:40 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:28:07.473 00:47:40 -- common/autotest_common.sh@69 -- # : 00:28:07.473 00:47:40 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:28:07.474 00:47:40 -- common/autotest_common.sh@71 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:28:07.474 00:47:40 -- common/autotest_common.sh@73 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:28:07.474 00:47:40 -- common/autotest_common.sh@75 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:28:07.474 00:47:40 -- common/autotest_common.sh@77 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:28:07.474 00:47:40 -- common/autotest_common.sh@79 -- # : 1 00:28:07.474 00:47:40 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:28:07.474 00:47:40 -- common/autotest_common.sh@81 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:28:07.474 00:47:40 -- common/autotest_common.sh@83 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:28:07.474 00:47:40 -- common/autotest_common.sh@85 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:28:07.474 00:47:40 -- common/autotest_common.sh@87 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:28:07.474 00:47:40 -- common/autotest_common.sh@89 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:28:07.474 00:47:40 -- common/autotest_common.sh@91 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:28:07.474 00:47:40 -- common/autotest_common.sh@93 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:28:07.474 00:47:40 -- common/autotest_common.sh@95 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:28:07.474 00:47:40 -- common/autotest_common.sh@97 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:28:07.474 00:47:40 -- common/autotest_common.sh@99 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:28:07.474 00:47:40 -- common/autotest_common.sh@101 -- # : rdma 00:28:07.474 00:47:40 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:28:07.474 00:47:40 -- common/autotest_common.sh@103 -- # : 0 00:28:07.474 
00:47:40 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:28:07.474 00:47:40 -- common/autotest_common.sh@105 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:28:07.474 00:47:40 -- common/autotest_common.sh@107 -- # : 1 00:28:07.474 00:47:40 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:28:07.474 00:47:40 -- common/autotest_common.sh@109 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:28:07.474 00:47:40 -- common/autotest_common.sh@111 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:28:07.474 00:47:40 -- common/autotest_common.sh@113 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:28:07.474 00:47:40 -- common/autotest_common.sh@115 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:28:07.474 00:47:40 -- common/autotest_common.sh@117 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:28:07.474 00:47:40 -- common/autotest_common.sh@119 -- # : 1 00:28:07.474 00:47:40 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:28:07.474 00:47:40 -- common/autotest_common.sh@121 -- # : 1 00:28:07.474 00:47:40 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:28:07.474 00:47:40 -- common/autotest_common.sh@123 -- # : 00:28:07.474 00:47:40 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:28:07.474 00:47:40 -- common/autotest_common.sh@125 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:28:07.474 00:47:40 -- common/autotest_common.sh@127 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:28:07.474 00:47:40 -- common/autotest_common.sh@129 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:28:07.474 00:47:40 -- common/autotest_common.sh@131 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:28:07.474 00:47:40 -- common/autotest_common.sh@133 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:28:07.474 00:47:40 -- common/autotest_common.sh@135 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:28:07.474 00:47:40 -- common/autotest_common.sh@137 -- # : 00:28:07.474 00:47:40 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:28:07.474 00:47:40 -- common/autotest_common.sh@139 -- # : true 00:28:07.474 00:47:40 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:28:07.474 00:47:40 -- common/autotest_common.sh@141 -- # : 1 00:28:07.474 00:47:40 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:28:07.474 00:47:40 -- common/autotest_common.sh@143 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:28:07.474 00:47:40 -- common/autotest_common.sh@145 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:28:07.474 00:47:40 -- common/autotest_common.sh@147 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:28:07.474 00:47:40 -- common/autotest_common.sh@149 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:28:07.474 00:47:40 -- common/autotest_common.sh@151 -- # : 0 
00:28:07.474 00:47:40 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:28:07.474 00:47:40 -- common/autotest_common.sh@153 -- # : 00:28:07.474 00:47:40 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:28:07.474 00:47:40 -- common/autotest_common.sh@155 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:28:07.474 00:47:40 -- common/autotest_common.sh@157 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:28:07.474 00:47:40 -- common/autotest_common.sh@159 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:28:07.474 00:47:40 -- common/autotest_common.sh@161 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:28:07.474 00:47:40 -- common/autotest_common.sh@163 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:28:07.474 00:47:40 -- common/autotest_common.sh@166 -- # : 00:28:07.474 00:47:40 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:28:07.474 00:47:40 -- common/autotest_common.sh@168 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:28:07.474 00:47:40 -- common/autotest_common.sh@170 -- # : 0 00:28:07.474 00:47:40 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:28:07.474 00:47:40 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:28:07.474 00:47:40 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:28:07.474 00:47:40 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:28:07.474 00:47:40 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:28:07.474 00:47:40 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:07.474 00:47:40 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:07.474 00:47:40 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:07.475 00:47:40 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:28:07.475 00:47:40 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:28:07.475 00:47:40 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:28:07.475 00:47:40 -- common/autotest_common.sh@184 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:07.475 00:47:40 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:07.475 00:47:40 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:28:07.475 00:47:40 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:28:07.475 00:47:40 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:07.475 00:47:40 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:07.475 00:47:40 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:07.475 00:47:40 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:07.475 00:47:40 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:28:07.475 00:47:40 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:28:07.475 00:47:40 -- common/autotest_common.sh@199 -- # cat 00:28:07.475 00:47:40 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:28:07.475 00:47:40 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:07.475 00:47:40 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:07.475 00:47:40 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:07.475 00:47:40 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:07.475 00:47:40 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:28:07.475 00:47:40 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:28:07.475 00:47:40 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:28:07.475 00:47:40 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:28:07.475 00:47:40 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:28:07.475 00:47:40 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:28:07.475 00:47:40 -- common/autotest_common.sh@242 -- # export QEMU_BIN= 00:28:07.475 00:47:40 -- common/autotest_common.sh@242 -- # QEMU_BIN= 00:28:07.475 00:47:40 -- common/autotest_common.sh@243 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:28:07.475 00:47:40 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:28:07.475 00:47:40 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:28:07.475 00:47:40 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:28:07.475 00:47:40 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:07.475 00:47:40 -- 
common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:07.475 00:47:40 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:28:07.475 00:47:40 -- common/autotest_common.sh@252 -- # export valgrind= 00:28:07.475 00:47:40 -- common/autotest_common.sh@252 -- # valgrind= 00:28:07.475 00:47:40 -- common/autotest_common.sh@258 -- # uname -s 00:28:07.475 00:47:40 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:28:07.475 00:47:40 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:28:07.475 00:47:40 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:28:07.475 00:47:40 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:28:07.475 00:47:40 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:28:07.475 00:47:40 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:28:07.475 00:47:40 -- common/autotest_common.sh@268 -- # MAKE=make 00:28:07.475 00:47:40 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:28:07.475 00:47:40 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:28:07.475 00:47:40 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:28:07.475 00:47:40 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:28:07.475 00:47:40 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:28:07.475 00:47:40 -- common/autotest_common.sh@307 -- # [[ -z 141011 ]] 00:28:07.475 00:47:40 -- common/autotest_common.sh@307 -- # kill -0 141011 00:28:07.475 00:47:40 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:28:07.475 00:47:40 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:28:07.475 00:47:40 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:28:07.475 00:47:40 -- common/autotest_common.sh@320 -- # local mount target_dir 00:28:07.475 00:47:40 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:28:07.475 00:47:40 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:28:07.475 00:47:40 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:28:07.475 00:47:40 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:28:07.475 00:47:40 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.eqj9Oz 00:28:07.475 00:47:40 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:28:07.475 00:47:40 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:28:07.475 00:47:40 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:28:07.475 00:47:40 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.eqj9Oz/tests/interrupt /tmp/spdk.eqj9Oz 00:28:07.475 00:47:40 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:28:07.475 00:47:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:28:07.475 00:47:40 -- common/autotest_common.sh@316 -- # df -T 00:28:07.475 00:47:40 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=1248956416 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253683200 00:28:07.475 00:47:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=4726784 00:28:07.475 00:47:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:28:07.475 
00:47:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=10271625216 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:28:07.475 00:47:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=10328391680 00:28:07.475 00:47:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=6265786368 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6268399616 00:28:07.475 00:47:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:28:07.475 00:47:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:28:07.475 00:47:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:28:07.475 00:47:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda15 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=103061504 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109395968 00:28:07.475 00:47:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=6334464 00:28:07.475 00:47:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253675008 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253679104 00:28:07.475 00:47:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:28:07.475 00:47:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:28:07.475 00:47:40 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # avails["$mount"]=97097527296 00:28:07.475 00:47:40 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:28:07.475 00:47:40 -- common/autotest_common.sh@352 -- # uses["$mount"]=2605252608 00:28:07.476 00:47:40 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:28:07.476 00:47:40 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:28:07.476 * Looking for test storage... 
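At this point the `df -T` output has been folded into the per-mount arrays traced above (mounts, fss, avails, sizes, uses); the lines below then walk the storage candidates and keep the first directory whose backing mount can hold the requested 2214592512 bytes (2 GiB plus 64 MiB of headroom). A condensed sketch of that selection as a standalone helper, not the verbatim autotest_common.sh code (pick_test_storage is an invented name, and it re-queries df per directory instead of reusing the arrays):

    pick_test_storage() {
        local requested_size=$1; shift   # bytes needed, e.g. 2214592512
        local dir avail
        for dir in "$@"; do
            # Free bytes on the filesystem backing this candidate directory.
            avail=$(df --output=avail -B1 -- "$dir" 2>/dev/null | tail -n1)
            [[ -n $avail ]] || continue
            if (( avail >= requested_size )); then
                printf '%s\n' "$dir"
                return 0
            fi
        done
        return 1
    }

    # Mirroring the candidates above:
    # pick_test_storage 2214592512 /home/vagrant/spdk_repo/spdk/test/interrupt \
    #     /tmp/spdk.eqj9Oz/tests/interrupt /tmp/spdk.eqj9Oz
    # -> /home/vagrant/spdk_repo/spdk/test/interrupt (10271625216 bytes free on /)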
00:28:07.476 00:47:40 -- common/autotest_common.sh@357 -- # local target_space new_size 00:28:07.476 00:47:40 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:28:07.476 00:47:40 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:07.476 00:47:40 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:28:07.476 00:47:40 -- common/autotest_common.sh@361 -- # mount=/ 00:28:07.476 00:47:40 -- common/autotest_common.sh@363 -- # target_space=10271625216 00:28:07.476 00:47:40 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:28:07.476 00:47:40 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:28:07.476 00:47:40 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:28:07.476 00:47:40 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:28:07.476 00:47:40 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:28:07.476 00:47:40 -- common/autotest_common.sh@370 -- # new_size=12542984192 00:28:07.476 00:47:40 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:28:07.476 00:47:40 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:28:07.476 00:47:40 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:28:07.476 00:47:40 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:07.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:28:07.476 00:47:40 -- common/autotest_common.sh@378 -- # return 0 00:28:07.476 00:47:40 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:28:07.476 00:47:40 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:28:07.476 00:47:40 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:28:07.476 00:47:40 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:28:07.476 00:47:40 -- common/autotest_common.sh@1673 -- # true 00:28:07.476 00:47:40 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:28:07.476 00:47:40 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:28:07.476 00:47:40 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:28:07.476 00:47:40 -- common/autotest_common.sh@27 -- # exec 00:28:07.476 00:47:40 -- common/autotest_common.sh@29 -- # exec 00:28:07.476 00:47:40 -- common/autotest_common.sh@31 -- # xtrace_restore 00:28:07.476 00:47:40 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:28:07.476 00:47:40 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:28:07.476 00:47:40 -- common/autotest_common.sh@18 -- # set -x 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:28:07.476 00:47:40 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:28:07.476 00:47:40 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:28:07.476 00:47:40 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=141055 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:07.476 00:47:40 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 141055 /var/tmp/spdk.sock 00:28:07.476 00:47:40 -- common/autotest_common.sh@817 -- # '[' -z 141055 ']' 00:28:07.476 00:47:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.476 00:47:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:07.476 00:47:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.476 00:47:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:07.476 00:47:40 -- common/autotest_common.sh@10 -- # set +x 00:28:07.476 [2024-04-27 00:47:41.024297] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:28:07.476 [2024-04-27 00:47:41.024509] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141055 ] 00:28:07.734 [2024-04-27 00:47:41.200742] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:07.992 [2024-04-27 00:47:41.383377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.992 [2024-04-27 00:47:41.383556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.992 [2024-04-27 00:47:41.383564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.250 [2024-04-27 00:47:41.647060] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:08.508 00:47:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:08.508 00:47:41 -- common/autotest_common.sh@850 -- # return 0 00:28:08.508 00:47:41 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:28:08.508 00:47:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.508 00:47:41 -- common/autotest_common.sh@10 -- # set +x 00:28:08.508 00:47:41 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:28:08.508 00:47:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.508 00:47:41 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:28:08.508 "name": "app_thread", 00:28:08.508 "id": 1, 00:28:08.508 "active_pollers": [], 00:28:08.508 "timed_pollers": [ 00:28:08.508 { 00:28:08.508 "name": "rpc_subsystem_poll_servers", 00:28:08.508 "id": 1, 00:28:08.508 "state": "waiting", 00:28:08.508 "run_count": 0, 00:28:08.508 "busy_count": 0, 00:28:08.508 "period_ticks": 8800000 00:28:08.508 } 00:28:08.508 ], 00:28:08.508 "paused_pollers": [] 00:28:08.508 }' 00:28:08.508 00:47:41 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:28:08.508 00:47:41 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:28:08.508 00:47:41 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:28:08.508 00:47:41 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:28:08.508 00:47:42 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:28:08.508 00:47:42 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:28:08.508 00:47:42 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:28:08.508 00:47:42 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:08.508 00:47:42 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:08.508 5000+0 records in 00:28:08.508 5000+0 records out 00:28:08.508 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0242091 s, 423 MB/s 00:28:08.508 00:47:42 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:28:09.099 AIO0 00:28:09.099 00:47:42 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:09.099 00:47:42 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:28:09.359 00:47:42 -- 
interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:28:09.359 00:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.359 00:47:42 -- common/autotest_common.sh@10 -- # set +x 00:28:09.359 00:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:28:09.359 "name": "app_thread", 00:28:09.359 "id": 1, 00:28:09.359 "active_pollers": [], 00:28:09.359 "timed_pollers": [ 00:28:09.359 { 00:28:09.359 "name": "rpc_subsystem_poll_servers", 00:28:09.359 "id": 1, 00:28:09.359 "state": "waiting", 00:28:09.359 "run_count": 0, 00:28:09.359 "busy_count": 0, 00:28:09.359 "period_ticks": 8800000 00:28:09.359 } 00:28:09.359 ], 00:28:09.359 "paused_pollers": [] 00:28:09.359 }' 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:28:09.359 00:47:42 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 141055 00:28:09.359 00:47:42 -- common/autotest_common.sh@936 -- # '[' -z 141055 ']' 00:28:09.359 00:47:42 -- common/autotest_common.sh@940 -- # kill -0 141055 00:28:09.359 00:47:42 -- common/autotest_common.sh@941 -- # uname 00:28:09.359 00:47:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:09.359 00:47:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 141055 00:28:09.359 00:47:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:09.359 00:47:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:09.359 killing process with pid 141055 00:28:09.359 00:47:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 141055' 00:28:09.359 00:47:42 -- common/autotest_common.sh@955 -- # kill 141055 00:28:09.359 00:47:42 -- common/autotest_common.sh@960 -- # wait 141055 00:28:10.735 00:47:44 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:28:10.735 00:47:44 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:28:10.735 00:28:10.735 real 0m3.313s 00:28:10.735 user 0m2.720s 00:28:10.735 sys 0m0.543s 00:28:10.735 00:47:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:10.735 00:47:44 -- common/autotest_common.sh@10 -- # set +x 00:28:10.735 ************************************ 00:28:10.735 END TEST reap_unregistered_poller 00:28:10.735 ************************************ 00:28:10.735 00:47:44 -- spdk/autotest.sh@194 -- # uname -s 00:28:10.735 00:47:44 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:28:10.735 00:47:44 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:28:10.735 00:47:44 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:28:10.735 00:47:44 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:28:10.735 00:47:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:10.735 00:47:44 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:28:10.735 00:47:44 -- common/autotest_common.sh@10 -- # set +x 00:28:10.735 ************************************ 00:28:10.735 START TEST spdk_dd 00:28:10.735 ************************************ 00:28:10.735 00:47:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:28:10.735 * Looking for test storage... 00:28:10.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:10.735 00:47:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:10.735 00:47:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.735 00:47:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.735 00:47:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.735 00:47:44 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:10.735 00:47:44 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:10.735 00:47:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:10.735 00:47:44 -- paths/export.sh@5 -- # export PATH 00:28:10.735 00:47:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:10.735 00:47:44 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:10.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:10.994 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:11.929 00:47:45 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:28:11.929 00:47:45 -- dd/dd.sh@11 -- # nvme_in_userspace 00:28:11.929 00:47:45 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:11.929 00:47:45 -- scripts/common.sh@310 -- # local nvmes 00:28:11.929 00:47:45 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:28:11.929 00:47:45 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:11.929 00:47:45 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:28:11.929 00:47:45 -- scripts/common.sh@295 -- # local bdf= 00:28:11.929 00:47:45 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 
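The nvme_in_userspace walk that follows builds two-digit hex strings for PCI class 01 (mass storage), subclass 08 (non-volatile memory), and prog-if 02 (NVM Express), then filters `lspci -mm -n -D` for matching functions. The same pipeline as a standalone sketch (list_nvme_bdfs is an invented name; assumes GNU pciutils and mirrors the grep/awk/tr chain traced below):

    list_nvme_bdfs() {
        local class subclass progif
        class=$(printf '%02x' 1)      # 01: mass storage controller
        subclass=$(printf '%02x' 8)   # 08: non-volatile memory controller
        progif=$(printf '%02x' 2)     # 02: NVM Express interface
        # -mm machine-readable, -n numeric IDs, -D full domain; the class field
        # shows as "0108" and the prog-if rides along as a -p02 suffix.
        lspci -mm -n -D | grep -i -- "-p${progif}" |
            awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
            tr -d '"'
    }
    # On this VM it prints the single controller: 0000:00:10.0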
00:28:11.929 00:47:45 -- scripts/common.sh@230 -- # local class 00:28:11.929 00:47:45 -- scripts/common.sh@231 -- # local subclass 00:28:11.929 00:47:45 -- scripts/common.sh@232 -- # local progif 00:28:12.188 00:47:45 -- scripts/common.sh@233 -- # printf %02x 1 00:28:12.188 00:47:45 -- scripts/common.sh@233 -- # class=01 00:28:12.188 00:47:45 -- scripts/common.sh@234 -- # printf %02x 8 00:28:12.188 00:47:45 -- scripts/common.sh@234 -- # subclass=08 00:28:12.188 00:47:45 -- scripts/common.sh@235 -- # printf %02x 2 00:28:12.188 00:47:45 -- scripts/common.sh@235 -- # progif=02 00:28:12.188 00:47:45 -- scripts/common.sh@237 -- # hash lspci 00:28:12.188 00:47:45 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:28:12.188 00:47:45 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:28:12.188 00:47:45 -- scripts/common.sh@240 -- # grep -i -- -p02 00:28:12.188 00:47:45 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:12.188 00:47:45 -- scripts/common.sh@242 -- # tr -d '"' 00:28:12.188 00:47:45 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:12.188 00:47:45 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:28:12.188 00:47:45 -- scripts/common.sh@15 -- # local i 00:28:12.188 00:47:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:28:12.188 00:47:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:12.188 00:47:45 -- scripts/common.sh@24 -- # return 0 00:28:12.188 00:47:45 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:28:12.188 00:47:45 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:12.188 00:47:45 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:12.188 00:47:45 -- scripts/common.sh@320 -- # uname -s 00:28:12.188 00:47:45 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:12.188 00:47:45 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:12.188 00:47:45 -- scripts/common.sh@325 -- # (( 1 )) 00:28:12.188 00:47:45 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:28:12.188 00:47:45 -- dd/dd.sh@13 -- # check_liburing 00:28:12.188 00:47:45 -- dd/common.sh@139 -- # local lib so 00:28:12.188 00:47:45 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:28:12.188 00:47:45 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # 
[[ libcrypto.so.3 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:28:12.188 00:47:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:28:12.188 00:47:45 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:28:12.188 00:47:45 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:28:12.188 00:47:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:12.188 00:47:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:12.188 00:47:45 -- common/autotest_common.sh@10 -- # set +x 00:28:12.188 ************************************ 00:28:12.188 START TEST spdk_dd_basic_rw 00:28:12.188 ************************************ 00:28:12.188 00:47:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:28:12.188 * Looking for test storage... 
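A few lines below, dd/basic_rw.sh derives the drive's native block size: get_native_nvme_bs captures the spdk_nvme_identify dump and pulls the active LBA format's data size out of it with two regexes. A condensed sketch of that helper built around the same two regexes visible in the trace (paths shortened; the real dd/common.sh version reads the dump via mapfile):

    get_native_nvme_bs() {
        local pci=$1 id lbaf
        id=$(build/bin/spdk_nvme_identify -r "trtype:pcie traddr:${pci}")
        # Which LBA format is active? (#04 in the dump below)
        [[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ #([0-9]+) ]] || return 1
        lbaf=${BASH_REMATCH[1]}
        # That format's data size: #04 -> 4096 bytes, metadata size 0.
        [[ $id =~ LBA\ Format\ #${lbaf}:\ Data\ Size:\ *([0-9]+) ]] || return 1
        echo "${BASH_REMATCH[1]}"
    }
    # get_native_nvme_bs 0000:00:10.0  -> 4096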
00:28:12.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:12.189 00:47:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:12.189 00:47:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.189 00:47:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.189 00:47:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.189 00:47:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:12.189 00:47:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:12.189 00:47:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:12.189 00:47:45 -- paths/export.sh@5 -- # export PATH 00:28:12.189 00:47:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:12.189 00:47:45 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:28:12.189 00:47:45 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:28:12.189 00:47:45 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:28:12.189 00:47:45 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:28:12.189 00:47:45 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:28:12.189 00:47:45 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:28:12.189 00:47:45 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:28:12.189 00:47:45 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:12.189 00:47:45 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:12.189 00:47:45 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:28:12.189 00:47:45 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:28:12.189 00:47:45 -- dd/common.sh@126 -- # mapfile -t id 00:28:12.189 00:47:45 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:28:12.450 00:47:45 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2298 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:28:12.450 00:47:45 -- dd/common.sh@130 -- # lbaf=04 00:28:12.451 00:47:45 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not 
Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset 
Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 105 Data Units Written: 7 Host Read Commands: 2298 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:28:12.451 00:47:45 -- dd/common.sh@132 -- # lbaf=4096 00:28:12.451 00:47:45 -- dd/common.sh@134 -- # echo 4096 00:28:12.451 00:47:45 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:28:12.451 00:47:45 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:28:12.451 00:47:45 -- dd/basic_rw.sh@96 -- # : 00:28:12.451 00:47:45 -- dd/basic_rw.sh@96 -- # gen_conf 00:28:12.451 00:47:45 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:28:12.451 00:47:45 -- dd/common.sh@31 -- # xtrace_disable 00:28:12.451 00:47:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:12.451 00:47:45 -- common/autotest_common.sh@10 -- # set +x 00:28:12.451 00:47:45 -- common/autotest_common.sh@10 -- # set +x 00:28:12.451 ************************************ 
00:28:12.451 START TEST dd_bs_lt_native_bs 00:28:12.451 ************************************ 00:28:12.451 00:47:46 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:28:12.710 00:47:46 -- common/autotest_common.sh@638 -- # local es=0 00:28:12.710 00:47:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:28:12.710 00:47:46 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.710 00:47:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:12.710 00:47:46 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.710 00:47:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:12.710 00:47:46 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.710 00:47:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:12.710 00:47:46 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:12.710 00:47:46 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:12.710 00:47:46 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:28:12.710 { 00:28:12.710 "subsystems": [ 00:28:12.710 { 00:28:12.710 "subsystem": "bdev", 00:28:12.710 "config": [ 00:28:12.710 { 00:28:12.710 "params": { 00:28:12.710 "trtype": "pcie", 00:28:12.710 "traddr": "0000:00:10.0", 00:28:12.710 "name": "Nvme0" 00:28:12.710 }, 00:28:12.710 "method": "bdev_nvme_attach_controller" 00:28:12.710 }, 00:28:12.710 { 00:28:12.710 "method": "bdev_wait_for_examine" 00:28:12.710 } 00:28:12.710 ] 00:28:12.710 } 00:28:12.710 ] 00:28:12.710 } 00:28:12.710 [2024-04-27 00:47:46.110156] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:28:12.710 [2024-04-27 00:47:46.110417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141383 ] 00:28:12.710 [2024-04-27 00:47:46.281423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.970 [2024-04-27 00:47:46.535680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.535 [2024-04-27 00:47:46.911129] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:28:13.535 [2024-04-27 00:47:46.911282] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:14.102 [2024-04-27 00:47:47.609945] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:28:14.669 00:47:47 -- common/autotest_common.sh@641 -- # es=234 00:28:14.669 00:47:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:14.669 00:47:47 -- common/autotest_common.sh@650 -- # es=106 00:28:14.669 00:47:47 -- common/autotest_common.sh@651 -- # case "$es" in 00:28:14.669 00:47:47 -- common/autotest_common.sh@658 -- # es=1 00:28:14.669 00:47:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:14.669 00:28:14.669 real 0m1.956s 00:28:14.669 user 0m1.604s 00:28:14.669 sys 0m0.315s 00:28:14.669 00:47:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:14.669 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:28:14.669 ************************************ 00:28:14.669 END TEST dd_bs_lt_native_bs 00:28:14.669 ************************************ 00:28:14.669 00:47:48 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:28:14.669 00:47:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:14.669 00:47:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:14.669 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:28:14.669 ************************************ 00:28:14.669 START TEST dd_rw 00:28:14.669 ************************************ 00:28:14.669 00:47:48 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:28:14.669 00:47:48 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:28:14.669 00:47:48 -- dd/basic_rw.sh@12 -- # local count size 00:28:14.669 00:47:48 -- dd/basic_rw.sh@13 -- # local qds bss 00:28:14.669 00:47:48 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:28:14.669 00:47:48 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:28:14.669 00:47:48 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:28:14.669 00:47:48 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:28:14.669 00:47:48 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:28:14.669 00:47:48 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:28:14.669 00:47:48 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:28:14.669 00:47:48 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:28:14.669 00:47:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:14.669 00:47:48 -- dd/basic_rw.sh@23 -- # count=15 00:28:14.669 00:47:48 -- dd/basic_rw.sh@24 -- # count=15 00:28:14.669 00:47:48 -- dd/basic_rw.sh@25 -- # size=61440 00:28:14.669 00:47:48 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:28:14.669 00:47:48 -- dd/common.sh@98 -- # xtrace_disable 00:28:14.669 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.237 00:47:48 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
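The dd_rw setup above expands a small test matrix: block sizes are the native block size shifted left by 0, 1, and 2 bits (4096, 8192, 16384 bytes), queue depths are 1 and 64, and each pass copies count=15 blocks, hence the 61440-byte ("60 kB") transfers at bs=4096. Spelled out as a sketch of the loop structure, not a verbatim copy of basic_rw.sh (the echo stands in for the write/read/diff sequence traced below):

    native_bs=4096            # from get_native_nvme_bs above
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))      # 4096 8192 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=15
            size=$((count * bs))         # 61440 bytes at bs=4096
            echo "pass: bs=$bs qd=$qd size=$size"
        done
    done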
00:28:15.237 00:47:48 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:15.237 00:47:48 -- dd/common.sh@31 -- # xtrace_disable 00:28:15.237 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:28:15.237 { 00:28:15.237 "subsystems": [ 00:28:15.237 { 00:28:15.237 "subsystem": "bdev", 00:28:15.237 "config": [ 00:28:15.237 { 00:28:15.237 "params": { 00:28:15.237 "trtype": "pcie", 00:28:15.237 "traddr": "0000:00:10.0", 00:28:15.237 "name": "Nvme0" 00:28:15.237 }, 00:28:15.237 "method": "bdev_nvme_attach_controller" 00:28:15.237 }, 00:28:15.237 { 00:28:15.237 "method": "bdev_wait_for_examine" 00:28:15.237 } 00:28:15.237 ] 00:28:15.237 } 00:28:15.237 ] 00:28:15.237 } 00:28:15.237 [2024-04-27 00:47:48.742315] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:15.237 [2024-04-27 00:47:48.742749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141441 ] 00:28:15.496 [2024-04-27 00:47:48.910063] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.755 [2024-04-27 00:47:49.106036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.968  Copying: 60/60 [kB] (average 19 MBps) 00:28:16.968 00:28:16.968 00:47:50 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:28:16.968 00:47:50 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:16.968 00:47:50 -- dd/common.sh@31 -- # xtrace_disable 00:28:16.968 00:47:50 -- common/autotest_common.sh@10 -- # set +x 00:28:16.968 { 00:28:16.968 "subsystems": [ 00:28:16.968 { 00:28:16.968 "subsystem": "bdev", 00:28:16.968 "config": [ 00:28:16.968 { 00:28:16.968 "params": { 00:28:16.968 "trtype": "pcie", 00:28:16.968 "traddr": "0000:00:10.0", 00:28:16.968 "name": "Nvme0" 00:28:16.968 }, 00:28:16.968 "method": "bdev_nvme_attach_controller" 00:28:16.968 }, 00:28:16.968 { 00:28:16.968 "method": "bdev_wait_for_examine" 00:28:16.968 } 00:28:16.968 ] 00:28:16.968 } 00:28:16.968 ] 00:28:16.968 } 00:28:16.968 [2024-04-27 00:47:50.525653] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:28:16.968 [2024-04-27 00:47:50.526066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141471 ] 00:28:17.226 [2024-04-27 00:47:50.698193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.484 [2024-04-27 00:47:50.925904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.118  Copying: 60/60 [kB] (average 19 MBps) 00:28:19.118 00:28:19.118 00:47:52 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:19.118 00:47:52 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:28:19.118 00:47:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:19.118 00:47:52 -- dd/common.sh@11 -- # local nvme_ref= 00:28:19.118 00:47:52 -- dd/common.sh@12 -- # local size=61440 00:28:19.118 00:47:52 -- dd/common.sh@14 -- # local bs=1048576 00:28:19.118 00:47:52 -- dd/common.sh@15 -- # local count=1 00:28:19.118 00:47:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:19.118 00:47:52 -- dd/common.sh@18 -- # gen_conf 00:28:19.118 00:47:52 -- dd/common.sh@31 -- # xtrace_disable 00:28:19.118 00:47:52 -- common/autotest_common.sh@10 -- # set +x 00:28:19.118 { 00:28:19.118 "subsystems": [ 00:28:19.118 { 00:28:19.118 "subsystem": "bdev", 00:28:19.118 "config": [ 00:28:19.118 { 00:28:19.118 "params": { 00:28:19.118 "trtype": "pcie", 00:28:19.118 "traddr": "0000:00:10.0", 00:28:19.118 "name": "Nvme0" 00:28:19.118 }, 00:28:19.118 "method": "bdev_nvme_attach_controller" 00:28:19.118 }, 00:28:19.118 { 00:28:19.118 "method": "bdev_wait_for_examine" 00:28:19.118 } 00:28:19.118 ] 00:28:19.118 } 00:28:19.118 ] 00:28:19.118 } 00:28:19.118 [2024-04-27 00:47:52.422909] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:19.118 [2024-04-27 00:47:52.423338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141499 ] 00:28:19.118 [2024-04-27 00:47:52.589387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.377 [2024-04-27 00:47:52.796135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.569  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:20.569 00:28:20.569 00:47:54 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:20.569 00:47:54 -- dd/basic_rw.sh@23 -- # count=15 00:28:20.569 00:47:54 -- dd/basic_rw.sh@24 -- # count=15 00:28:20.569 00:47:54 -- dd/basic_rw.sh@25 -- # size=61440 00:28:20.569 00:47:54 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:28:20.569 00:47:54 -- dd/common.sh@98 -- # xtrace_disable 00:28:20.569 00:47:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.136 00:47:54 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:28:21.136 00:47:54 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:21.136 00:47:54 -- dd/common.sh@31 -- # xtrace_disable 00:28:21.136 00:47:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.395 [2024-04-27 00:47:54.763721] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:28:21.395 [2024-04-27 00:47:54.764575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141531 ] 00:28:21.395 { 00:28:21.395 "subsystems": [ 00:28:21.395 { 00:28:21.395 "subsystem": "bdev", 00:28:21.395 "config": [ 00:28:21.395 { 00:28:21.395 "params": { 00:28:21.395 "trtype": "pcie", 00:28:21.395 "traddr": "0000:00:10.0", 00:28:21.395 "name": "Nvme0" 00:28:21.395 }, 00:28:21.395 "method": "bdev_nvme_attach_controller" 00:28:21.395 }, 00:28:21.395 { 00:28:21.395 "method": "bdev_wait_for_examine" 00:28:21.395 } 00:28:21.395 ] 00:28:21.395 } 00:28:21.395 ] 00:28:21.395 } 00:28:21.395 [2024-04-27 00:47:54.920846] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.653 [2024-04-27 00:47:55.112525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.286  Copying: 60/60 [kB] (average 58 MBps) 00:28:23.286 00:28:23.286 00:47:56 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:28:23.286 00:47:56 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:23.286 00:47:56 -- dd/common.sh@31 -- # xtrace_disable 00:28:23.286 00:47:56 -- common/autotest_common.sh@10 -- # set +x 00:28:23.286 { 00:28:23.286 "subsystems": [ 00:28:23.286 { 00:28:23.286 "subsystem": "bdev", 00:28:23.286 "config": [ 00:28:23.286 { 00:28:23.286 "params": { 00:28:23.286 "trtype": "pcie", 00:28:23.286 "traddr": "0000:00:10.0", 00:28:23.286 "name": "Nvme0" 00:28:23.286 }, 00:28:23.286 "method": "bdev_nvme_attach_controller" 00:28:23.286 }, 00:28:23.286 { 00:28:23.286 "method": "bdev_wait_for_examine" 00:28:23.286 } 00:28:23.286 ] 00:28:23.286 } 00:28:23.286 ] 00:28:23.286 } 00:28:23.286 [2024-04-27 00:47:56.558931] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:28:23.286 [2024-04-27 00:47:56.559327] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141559 ] 00:28:23.286 [2024-04-27 00:47:56.726499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.544 [2024-04-27 00:47:56.922802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.740  Copying: 60/60 [kB] (average 58 MBps) 00:28:24.740 00:28:24.740 00:47:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:24.740 00:47:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:28:24.740 00:47:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:24.740 00:47:58 -- dd/common.sh@11 -- # local nvme_ref= 00:28:24.740 00:47:58 -- dd/common.sh@12 -- # local size=61440 00:28:24.740 00:47:58 -- dd/common.sh@14 -- # local bs=1048576 00:28:24.740 00:47:58 -- dd/common.sh@15 -- # local count=1 00:28:24.740 00:47:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:24.740 00:47:58 -- dd/common.sh@18 -- # gen_conf 00:28:24.740 00:47:58 -- dd/common.sh@31 -- # xtrace_disable 00:28:24.740 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:28:24.740 { 00:28:24.740 "subsystems": [ 00:28:24.740 { 00:28:24.740 "subsystem": "bdev", 00:28:24.740 "config": [ 00:28:24.740 { 00:28:24.740 "params": { 00:28:24.740 "trtype": "pcie", 00:28:24.740 "traddr": "0000:00:10.0", 00:28:24.740 "name": "Nvme0" 00:28:24.740 }, 00:28:24.740 "method": "bdev_nvme_attach_controller" 00:28:24.740 }, 00:28:24.740 { 00:28:24.740 "method": "bdev_wait_for_examine" 00:28:24.740 } 00:28:24.740 ] 00:28:24.740 } 00:28:24.740 ] 00:28:24.740 } 00:28:24.740 [2024-04-27 00:47:58.295573] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:28:24.740 [2024-04-27 00:47:58.296246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141591 ] 00:28:24.999 [2024-04-27 00:47:58.466371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.257 [2024-04-27 00:47:58.642149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.448  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:26.448 00:28:26.448 00:48:00 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:28:26.448 00:48:00 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:26.448 00:48:00 -- dd/basic_rw.sh@23 -- # count=7 00:28:26.448 00:48:00 -- dd/basic_rw.sh@24 -- # count=7 00:28:26.448 00:48:00 -- dd/basic_rw.sh@25 -- # size=57344 00:28:26.448 00:48:00 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:28:26.448 00:48:00 -- dd/common.sh@98 -- # xtrace_disable 00:28:26.448 00:48:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.014 00:48:00 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:28:27.014 00:48:00 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:27.014 00:48:00 -- dd/common.sh@31 -- # xtrace_disable 00:28:27.014 00:48:00 -- common/autotest_common.sh@10 -- # set +x 00:28:27.272 { 00:28:27.272 "subsystems": [ 00:28:27.272 { 00:28:27.272 "subsystem": "bdev", 00:28:27.272 "config": [ 00:28:27.272 { 00:28:27.272 "params": { 00:28:27.272 "trtype": "pcie", 00:28:27.272 "traddr": "0000:00:10.0", 00:28:27.272 "name": "Nvme0" 00:28:27.273 }, 00:28:27.273 "method": "bdev_nvme_attach_controller" 00:28:27.273 }, 00:28:27.273 { 00:28:27.273 "method": "bdev_wait_for_examine" 00:28:27.273 } 00:28:27.273 ] 00:28:27.273 } 00:28:27.273 ] 00:28:27.273 } 00:28:27.273 [2024-04-27 00:48:00.646145] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
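The loop variables in the trace show how the sweep is parameterized: for each block size the count is chosen so every pass covers a comparable region (4096 × 15 = 61440 and 8192 × 7 = 57344 bytes above; 16384 × 3 = 49152 further down), and each block size runs at queue depths 1 and 64. The averages on the Copying: lines rise with queue depth as expected, e.g. 19 MBps at qd=1 vs 58 MBps at qd=64 for 4 KiB blocks. A sketch of how the sizes fall out of the loop values:

    # counts per block size, matching the for bs / for qd loops in the trace
    for bs in 4096 8192 16384; do
      case $bs in 4096) count=15 ;; 8192) count=7 ;; 16384) count=3 ;; esac
      for qd in 1 64; do
        echo "bs=$bs qd=$qd count=$count size=$((bs * count))"
      done
    done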
00:28:27.273 [2024-04-27 00:48:00.647074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141624 ] 00:28:27.273 [2024-04-27 00:48:00.817846] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.531 [2024-04-27 00:48:01.018914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.165  Copying: 56/56 [kB] (average 27 MBps) 00:28:29.165 00:28:29.165 00:48:02 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:28:29.165 00:48:02 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:29.165 00:48:02 -- dd/common.sh@31 -- # xtrace_disable 00:28:29.165 00:48:02 -- common/autotest_common.sh@10 -- # set +x 00:28:29.165 { 00:28:29.165 "subsystems": [ 00:28:29.165 { 00:28:29.165 "subsystem": "bdev", 00:28:29.165 "config": [ 00:28:29.165 { 00:28:29.165 "params": { 00:28:29.165 "trtype": "pcie", 00:28:29.165 "traddr": "0000:00:10.0", 00:28:29.165 "name": "Nvme0" 00:28:29.165 }, 00:28:29.165 "method": "bdev_nvme_attach_controller" 00:28:29.165 }, 00:28:29.165 { 00:28:29.165 "method": "bdev_wait_for_examine" 00:28:29.165 } 00:28:29.165 ] 00:28:29.165 } 00:28:29.165 ] 00:28:29.165 } 00:28:29.165 [2024-04-27 00:48:02.433160] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:29.165 [2024-04-27 00:48:02.434735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141652 ] 00:28:29.165 [2024-04-27 00:48:02.609139] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.425 [2024-04-27 00:48:02.800647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.059  Copying: 56/56 [kB] (average 27 MBps) 00:28:31.059 00:28:31.059 00:48:04 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:31.059 00:48:04 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:28:31.059 00:48:04 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:31.059 00:48:04 -- dd/common.sh@11 -- # local nvme_ref= 00:28:31.059 00:48:04 -- dd/common.sh@12 -- # local size=57344 00:28:31.059 00:48:04 -- dd/common.sh@14 -- # local bs=1048576 00:28:31.059 00:48:04 -- dd/common.sh@15 -- # local count=1 00:28:31.059 00:48:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:31.059 00:48:04 -- dd/common.sh@18 -- # gen_conf 00:28:31.059 00:48:04 -- dd/common.sh@31 -- # xtrace_disable 00:28:31.059 00:48:04 -- common/autotest_common.sh@10 -- # set +x 00:28:31.059 { 00:28:31.059 "subsystems": [ 00:28:31.059 { 00:28:31.059 "subsystem": "bdev", 00:28:31.059 "config": [ 00:28:31.059 { 00:28:31.059 "params": { 00:28:31.059 "trtype": "pcie", 00:28:31.059 "traddr": "0000:00:10.0", 00:28:31.059 "name": "Nvme0" 00:28:31.059 }, 00:28:31.059 "method": "bdev_nvme_attach_controller" 00:28:31.059 }, 00:28:31.059 { 00:28:31.059 "method": "bdev_wait_for_examine" 00:28:31.059 } 00:28:31.059 ] 00:28:31.059 } 00:28:31.059 ] 00:28:31.059 } 00:28:31.059 [2024-04-27 00:48:04.306313] Starting SPDK v24.05-pre git sha1 
6651b13f7 / DPDK 23.11.0 initialization... 00:28:31.059 [2024-04-27 00:48:04.306731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141680 ] 00:28:31.059 [2024-04-27 00:48:04.474026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.318 [2024-04-27 00:48:04.675663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.513  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:32.513 00:28:32.513 00:48:05 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:32.513 00:48:05 -- dd/basic_rw.sh@23 -- # count=7 00:28:32.513 00:48:05 -- dd/basic_rw.sh@24 -- # count=7 00:28:32.513 00:48:05 -- dd/basic_rw.sh@25 -- # size=57344 00:28:32.513 00:48:05 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:28:32.513 00:48:05 -- dd/common.sh@98 -- # xtrace_disable 00:28:32.513 00:48:05 -- common/autotest_common.sh@10 -- # set +x 00:28:33.080 00:48:06 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:28:33.080 00:48:06 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:33.080 00:48:06 -- dd/common.sh@31 -- # xtrace_disable 00:28:33.080 00:48:06 -- common/autotest_common.sh@10 -- # set +x 00:28:33.080 { 00:28:33.080 "subsystems": [ 00:28:33.080 { 00:28:33.080 "subsystem": "bdev", 00:28:33.080 "config": [ 00:28:33.080 { 00:28:33.080 "params": { 00:28:33.080 "trtype": "pcie", 00:28:33.080 "traddr": "0000:00:10.0", 00:28:33.080 "name": "Nvme0" 00:28:33.080 }, 00:28:33.080 "method": "bdev_nvme_attach_controller" 00:28:33.080 }, 00:28:33.080 { 00:28:33.080 "method": "bdev_wait_for_examine" 00:28:33.080 } 00:28:33.080 ] 00:28:33.080 } 00:28:33.080 ] 00:28:33.080 } 00:28:33.080 [2024-04-27 00:48:06.571337] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:28:33.080 [2024-04-27 00:48:06.571702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141712 ] 00:28:33.340 [2024-04-27 00:48:06.737649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.600 [2024-04-27 00:48:06.933161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.795  Copying: 56/56 [kB] (average 54 MBps) 00:28:34.796 00:28:34.796 00:48:08 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:28:34.796 00:48:08 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:34.796 00:48:08 -- dd/common.sh@31 -- # xtrace_disable 00:28:34.796 00:48:08 -- common/autotest_common.sh@10 -- # set +x 00:28:34.796 { 00:28:34.796 "subsystems": [ 00:28:34.796 { 00:28:34.796 "subsystem": "bdev", 00:28:34.796 "config": [ 00:28:34.796 { 00:28:34.796 "params": { 00:28:34.796 "trtype": "pcie", 00:28:34.796 "traddr": "0000:00:10.0", 00:28:34.796 "name": "Nvme0" 00:28:34.796 }, 00:28:34.796 "method": "bdev_nvme_attach_controller" 00:28:34.796 }, 00:28:34.796 { 00:28:34.796 "method": "bdev_wait_for_examine" 00:28:34.796 } 00:28:34.796 ] 00:28:34.796 } 00:28:34.796 ] 00:28:34.796 } 00:28:35.055 [2024-04-27 00:48:08.391559] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:35.055 [2024-04-27 00:48:08.392546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141743 ] 00:28:35.055 [2024-04-27 00:48:08.560947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.313 [2024-04-27 00:48:08.768718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.506  Copying: 56/56 [kB] (average 54 MBps) 00:28:36.507 00:28:36.766 00:48:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:36.766 00:48:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:28:36.766 00:48:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:36.766 00:48:10 -- dd/common.sh@11 -- # local nvme_ref= 00:28:36.766 00:48:10 -- dd/common.sh@12 -- # local size=57344 00:28:36.766 00:48:10 -- dd/common.sh@14 -- # local bs=1048576 00:28:36.766 00:48:10 -- dd/common.sh@15 -- # local count=1 00:28:36.766 00:48:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:36.766 00:48:10 -- dd/common.sh@18 -- # gen_conf 00:28:36.766 00:48:10 -- dd/common.sh@31 -- # xtrace_disable 00:28:36.766 00:48:10 -- common/autotest_common.sh@10 -- # set +x 00:28:36.766 { 00:28:36.766 "subsystems": [ 00:28:36.766 { 00:28:36.766 "subsystem": "bdev", 00:28:36.766 "config": [ 00:28:36.766 { 00:28:36.766 "params": { 00:28:36.766 "trtype": "pcie", 00:28:36.766 "traddr": "0000:00:10.0", 00:28:36.766 "name": "Nvme0" 00:28:36.766 }, 00:28:36.766 "method": "bdev_nvme_attach_controller" 00:28:36.766 }, 00:28:36.766 { 00:28:36.766 "method": "bdev_wait_for_examine" 00:28:36.766 } 00:28:36.766 ] 00:28:36.766 } 00:28:36.766 ] 00:28:36.766 } 00:28:36.766 [2024-04-27 00:48:10.165721] Starting SPDK v24.05-pre git sha1 
6651b13f7 / DPDK 23.11.0 initialization... 00:28:36.766 [2024-04-27 00:48:10.166073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141774 ] 00:28:36.766 [2024-04-27 00:48:10.333591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.025 [2024-04-27 00:48:10.506422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.660  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:38.660 00:28:38.660 00:48:11 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:28:38.660 00:48:11 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:38.660 00:48:11 -- dd/basic_rw.sh@23 -- # count=3 00:28:38.660 00:48:11 -- dd/basic_rw.sh@24 -- # count=3 00:28:38.660 00:48:11 -- dd/basic_rw.sh@25 -- # size=49152 00:28:38.660 00:48:11 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:28:38.660 00:48:11 -- dd/common.sh@98 -- # xtrace_disable 00:28:38.660 00:48:11 -- common/autotest_common.sh@10 -- # set +x 00:28:38.919 00:48:12 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:28:38.919 00:48:12 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:38.919 00:48:12 -- dd/common.sh@31 -- # xtrace_disable 00:28:38.919 00:48:12 -- common/autotest_common.sh@10 -- # set +x 00:28:38.919 { 00:28:38.919 "subsystems": [ 00:28:38.919 { 00:28:38.919 "subsystem": "bdev", 00:28:38.919 "config": [ 00:28:38.919 { 00:28:38.919 "params": { 00:28:38.919 "trtype": "pcie", 00:28:38.919 "traddr": "0000:00:10.0", 00:28:38.919 "name": "Nvme0" 00:28:38.919 }, 00:28:38.919 "method": "bdev_nvme_attach_controller" 00:28:38.919 }, 00:28:38.919 { 00:28:38.919 "method": "bdev_wait_for_examine" 00:28:38.919 } 00:28:38.919 ] 00:28:38.919 } 00:28:38.919 ] 00:28:38.919 } 00:28:38.919 [2024-04-27 00:48:12.458518] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
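gen_bytes NNN, invoked before each sweep, fills dd.dump0 with NNN bytes of pseudo-random data; the single-block variant used later by dd_rw_offset returns the bytes on stdout, drawn from [a-z0-9] as the data= assignment further down shows. The real helper is defined in dd/common.sh; the following is a plausible sketch only, not the actual implementation:

    # sketch only -- the real gen_bytes lives in dd/common.sh
    gen_bytes() {
      local n=$1
      tr -dc 'a-z0-9' < /dev/urandom | head -c "$n"
    }
    gen_bytes 49152 > "$D/dd.dump0"   # as used for the 16 KiB-block passes in this section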
00:28:38.919 [2024-04-27 00:48:12.459408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141805 ] 00:28:39.178 [2024-04-27 00:48:12.629798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.437 [2024-04-27 00:48:12.837160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.072  Copying: 48/48 [kB] (average 46 MBps) 00:28:41.072 00:28:41.072 00:48:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:28:41.072 00:48:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:41.072 00:48:14 -- dd/common.sh@31 -- # xtrace_disable 00:28:41.072 00:48:14 -- common/autotest_common.sh@10 -- # set +x 00:28:41.072 { 00:28:41.072 "subsystems": [ 00:28:41.072 { 00:28:41.072 "subsystem": "bdev", 00:28:41.072 "config": [ 00:28:41.072 { 00:28:41.072 "params": { 00:28:41.072 "trtype": "pcie", 00:28:41.072 "traddr": "0000:00:10.0", 00:28:41.072 "name": "Nvme0" 00:28:41.072 }, 00:28:41.072 "method": "bdev_nvme_attach_controller" 00:28:41.072 }, 00:28:41.072 { 00:28:41.072 "method": "bdev_wait_for_examine" 00:28:41.072 } 00:28:41.072 ] 00:28:41.072 } 00:28:41.072 ] 00:28:41.072 } 00:28:41.072 [2024-04-27 00:48:14.372623] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:41.072 [2024-04-27 00:48:14.372985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141833 ] 00:28:41.072 [2024-04-27 00:48:14.544506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.331 [2024-04-27 00:48:14.794441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.832  Copying: 48/48 [kB] (average 46 MBps) 00:28:42.832 00:28:42.832 00:48:16 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:42.832 00:48:16 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:28:42.832 00:48:16 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:42.832 00:48:16 -- dd/common.sh@11 -- # local nvme_ref= 00:28:42.832 00:48:16 -- dd/common.sh@12 -- # local size=49152 00:28:42.832 00:48:16 -- dd/common.sh@14 -- # local bs=1048576 00:28:42.832 00:48:16 -- dd/common.sh@15 -- # local count=1 00:28:42.832 00:48:16 -- dd/common.sh@18 -- # gen_conf 00:28:42.832 00:48:16 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:42.832 00:48:16 -- dd/common.sh@31 -- # xtrace_disable 00:28:42.832 00:48:16 -- common/autotest_common.sh@10 -- # set +x 00:28:42.832 { 00:28:42.832 "subsystems": [ 00:28:42.832 { 00:28:42.832 "subsystem": "bdev", 00:28:42.832 "config": [ 00:28:42.832 { 00:28:42.832 "params": { 00:28:42.832 "trtype": "pcie", 00:28:42.832 "traddr": "0000:00:10.0", 00:28:42.832 "name": "Nvme0" 00:28:42.832 }, 00:28:42.832 "method": "bdev_nvme_attach_controller" 00:28:42.832 }, 00:28:42.832 { 00:28:42.832 "method": "bdev_wait_for_examine" 00:28:42.832 } 00:28:42.832 ] 00:28:42.832 } 00:28:42.832 ] 00:28:42.832 } 00:28:42.832 [2024-04-27 00:48:16.340162] Starting SPDK v24.05-pre git sha1 
6651b13f7 / DPDK 23.11.0 initialization... 00:28:42.832 [2024-04-27 00:48:16.340515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141865 ] 00:28:43.090 [2024-04-27 00:48:16.511599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.348 [2024-04-27 00:48:16.717630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.566  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:44.566 00:28:44.566 00:48:18 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:28:44.566 00:48:18 -- dd/basic_rw.sh@23 -- # count=3 00:28:44.566 00:48:18 -- dd/basic_rw.sh@24 -- # count=3 00:28:44.567 00:48:18 -- dd/basic_rw.sh@25 -- # size=49152 00:28:44.567 00:48:18 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:28:44.567 00:48:18 -- dd/common.sh@98 -- # xtrace_disable 00:28:44.567 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:28:45.133 00:48:18 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:28:45.133 00:48:18 -- dd/basic_rw.sh@30 -- # gen_conf 00:28:45.133 00:48:18 -- dd/common.sh@31 -- # xtrace_disable 00:28:45.133 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:28:45.133 [2024-04-27 00:48:18.625919] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:45.133 [2024-04-27 00:48:18.626883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141893 ] 00:28:45.133 { 00:28:45.133 "subsystems": [ 00:28:45.133 { 00:28:45.133 "subsystem": "bdev", 00:28:45.133 "config": [ 00:28:45.133 { 00:28:45.133 "params": { 00:28:45.133 "trtype": "pcie", 00:28:45.133 "traddr": "0000:00:10.0", 00:28:45.133 "name": "Nvme0" 00:28:45.133 }, 00:28:45.133 "method": "bdev_nvme_attach_controller" 00:28:45.133 }, 00:28:45.133 { 00:28:45.133 "method": "bdev_wait_for_examine" 00:28:45.133 } 00:28:45.133 ] 00:28:45.133 } 00:28:45.133 ] 00:28:45.133 } 00:28:45.391 [2024-04-27 00:48:18.787578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.651 [2024-04-27 00:48:18.994485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.287  Copying: 48/48 [kB] (average 46 MBps) 00:28:47.287 00:28:47.287 00:48:20 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:28:47.287 00:48:20 -- dd/basic_rw.sh@37 -- # gen_conf 00:28:47.287 00:48:20 -- dd/common.sh@31 -- # xtrace_disable 00:28:47.287 00:48:20 -- common/autotest_common.sh@10 -- # set +x 00:28:47.287 { 00:28:47.287 "subsystems": [ 00:28:47.287 { 00:28:47.287 "subsystem": "bdev", 00:28:47.287 "config": [ 00:28:47.287 { 00:28:47.287 "params": { 00:28:47.287 "trtype": "pcie", 00:28:47.287 "traddr": "0000:00:10.0", 00:28:47.287 "name": "Nvme0" 00:28:47.287 }, 00:28:47.287 "method": "bdev_nvme_attach_controller" 00:28:47.287 }, 00:28:47.287 { 00:28:47.287 "method": "bdev_wait_for_examine" 00:28:47.287 } 00:28:47.287 ] 00:28:47.287 } 00:28:47.287 ] 00:28:47.287 } 00:28:47.287 [2024-04-27 00:48:20.522076] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 
initialization... 00:28:47.287 [2024-04-27 00:48:20.522474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141927 ] 00:28:47.287 [2024-04-27 00:48:20.690174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.547 [2024-04-27 00:48:20.900840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.742  Copying: 48/48 [kB] (average 46 MBps) 00:28:48.742 00:28:48.742 00:48:22 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:48.742 00:48:22 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:28:48.742 00:48:22 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:48.742 00:48:22 -- dd/common.sh@11 -- # local nvme_ref= 00:28:48.742 00:48:22 -- dd/common.sh@12 -- # local size=49152 00:28:48.742 00:48:22 -- dd/common.sh@14 -- # local bs=1048576 00:28:48.742 00:48:22 -- dd/common.sh@15 -- # local count=1 00:28:48.742 00:48:22 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:48.742 00:48:22 -- dd/common.sh@18 -- # gen_conf 00:28:48.742 00:48:22 -- dd/common.sh@31 -- # xtrace_disable 00:28:48.742 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:28:49.001 { 00:28:49.001 "subsystems": [ 00:28:49.001 { 00:28:49.001 "subsystem": "bdev", 00:28:49.002 "config": [ 00:28:49.002 { 00:28:49.002 "params": { 00:28:49.002 "trtype": "pcie", 00:28:49.002 "traddr": "0000:00:10.0", 00:28:49.002 "name": "Nvme0" 00:28:49.002 }, 00:28:49.002 "method": "bdev_nvme_attach_controller" 00:28:49.002 }, 00:28:49.002 { 00:28:49.002 "method": "bdev_wait_for_examine" 00:28:49.002 } 00:28:49.002 ] 00:28:49.002 } 00:28:49.002 ] 00:28:49.002 } 00:28:49.002 [2024-04-27 00:48:22.376420] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
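A note on the --json /dev/fd/62 argument that appears in every spdk_dd invocation above: gen_conf prints the JSON config on stdout (under xtrace_disable, so the generator itself is not traced), and the test hands it to spdk_dd as an anonymous file descriptor. Bash process substitution produces exactly this kind of /dev/fd/NN path, avoiding a temporary config file. Sketched with the helper name from the trace and the $DD/$D variables from the earlier sketch:

    # bash expands <(gen_conf) to /dev/fd/NN, which spdk_dd reads as a config file
    $DD --ib=Nvme0n1 --of=$D/dd.dump1 --bs=16384 --qd=64 --count=3 --json <(gen_conf)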
00:28:49.002 [2024-04-27 00:48:22.376879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141951 ] 00:28:49.002 [2024-04-27 00:48:22.540571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.260 [2024-04-27 00:48:22.739940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.799  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:50.799 00:28:50.799 ************************************ 00:28:50.799 END TEST dd_rw 00:28:50.799 ************************************ 00:28:50.799 00:28:50.799 real 0m36.134s 00:28:50.799 user 0m30.065s 00:28:50.799 sys 0m4.735s 00:28:50.799 00:48:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:50.799 00:48:24 -- common/autotest_common.sh@10 -- # set +x 00:28:50.799 00:48:24 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:28:50.799 00:48:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:50.799 00:48:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:50.799 00:48:24 -- common/autotest_common.sh@10 -- # set +x 00:28:50.799 ************************************ 00:28:50.799 START TEST dd_rw_offset 00:28:50.799 ************************************ 00:28:50.799 00:48:24 -- common/autotest_common.sh@1111 -- # basic_offset 00:28:50.799 00:48:24 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:28:50.799 00:48:24 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:28:50.799 00:48:24 -- dd/common.sh@98 -- # xtrace_disable 00:28:50.799 00:48:24 -- common/autotest_common.sh@10 -- # set +x 00:28:50.799 00:48:24 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:28:50.799 00:48:24 -- dd/basic_rw.sh@56 -- # 
data=o57amgeziu3s92owpaok1knz4h7c8lyh3xi55z3vctjw1814ne53n3umigdbkpqze1b1ik9posvpfsnckipamjxg1ekg3e1oq9as7js7f43l149oibcypo5mgd2viuwv4m1zeee9d9c2cw1oxnne7mc5wycq6ebnopnf6jr9c4rxop965z24zpkbubyae46yvtjt5gco809jxcyhtmiiap46bef435l199blvbopvv3te5gnxgdmnqurwpiynzlnetkj7h7s7090x4tqexj5m3kxwk3f1lykrz7u8f6sxpcp73sbkao7pbfptv4ev9zm38ffgt5fryyla5h84gakh7i3xq9h5ypx3i91vu2atpf3iujxc0yyq3y5kvltgnaky5oxbe2uul4xlag15zy0yw8ctbmndwgdpml02abf4xciw67g3tktfsszbaheffiw0lucavhwsx2hsc2kw78u0g60nvq8jfwkk9rtqwh8t8kjh1fd15icz0wacxtezg40qgx0nv3ybg9gukjclkzlr9vopogf6rkh53hsfqhgqtoj25aoarormd2iygnkfypcyfx9yb8fxxlvgo8ciiodybrfp6jgu1vmbdnh5t3t2b4d9wmtuoof6qkkjho7vwx5fjwgm5y0wt1dugdy1gsz4uso567wzhkmto8vvyf3d9op7zzvihzsxep7ppso00kkzi9upwln4zmcwf1om0uqpwaiayrdw2x9oz2p6n2wcmbfebt3kbvfxm0s5ek189saoigwhag0wtvzmeag4dvgr1zrpd75hupulpuuse0oruldkwx8jwmijdtwmh6anmktc5bxwbwa6i3i6nhphatxvnl5865tovj6yvt3klftmxxmu5u86niryembque4u7iceu32npu738wkdy8uxlk9iqz9guqtrr6mag82xgngmt2vhwvsxv512yo1x49u8hph35ekaez3sx6dlv29m01w2ilfbbctb4a8qzj5nab7w5j5p1x4eqo7sw3icgnx573z8gyl5iripr6cjulz8enmyym7bzzjikr7fbcp2pbflr0bzqezegippbrcgs45f1gmcu08jhanakhezml8nllxyg9fr4acs3p2lxhq06bd4qisazcmq6we33xpwcu4sws9weq9tqa1m6zdki4nefd2apwgyl9nkosgoyo3ijtliqga1pzg0ruewsw615m8t65agaqn8oarnca0iifxagl8zgm226r1782njpo4qrhpbh01n4nak4ha90zojp3i8q6e2oiq5rjychcg56m26agwcqkhjuh7uv8s3n25znrm7m32mxofnuegqkj94zg7u365cp9z0mjcu59wkjl1houp6bikfjlbdsvz7weu645sy0atv83b8cbrkbni3vmna7rqui4exsq5u3eqfenjyzxlrvw6hq1m1r0kki22dcw7l831kwkm9oey8vzhy734rk0n3zp885xh2tl0nqt210i158km4g2r6kszjuuk8gac2fjmipozmkunaer58vwna3newklif9g1th2ceyu6w4easrn4pnkkt2ugufz3e8dnf3egx6q8h4sxbmsjml5d3lnhct4i2oyb635t6urhd9nq742bhjxd35k21xbzk14311wbymc3rtu5desy26xtu0moe1851wjwplgcdd8fcaxrowkz0zyafjokkdxtzzfegcm93yxu92dy84h0v8m2qe96meaac5iv01q42na3aa4jv3vdwns159y1b9eczwmqyedllsvc4dmwwarsi4ii30zeyqiac3eai0zn5wl0kes1046la2ol6rhoo57ck6p68fpzkc9w6vp8xj6dswtn96m3otj2dmoj3mdasyzs97okyko2zq32hsequgfybeo239b7vjnmhdl448dlg886xr1mvmjpc3kxf5neyzkzdx6fy7jrqb3zmcqsuxhmzm7j23exqxo3180x2r1vulh1t2g7w9fuawbrx654f5fdrxrpk5oz0kihnjolzt7s4hhlpcxfc357hqyomu9ispqm61giitixeutg6u7vlg802i6bz8invw5uhx37830pxyg509wnky1i4pt6up5fwza5p0tiu85ii8zosmtonhfsz61jcb9t3ch9mfm31l5rub0pn0btz9af38jercvpsjbbtc3ui2tk2jqdoxepkx82qt72qon7cew368ogmpdcn9iocxcwy66dv8fh7cahhyxo0ja0wea97veilozbt5c85hw4e90nbwkr81jdw2vlfarazj7001o2cs00vcq4ope5npu0q9prm0y3qbtczajhhatqjq2yil547a0iohx6jem5lpr4st3dbnhlp9v85q3pc5mwb3qjxsybadv2w1nok4uxr1ph0hhbqzrvkhwcxb0gl22kzg00t2n7negve2fxf34k1hgdzdmx7gousez5sjwvnvysiod2m5vo9kckdxk8gtokpg46vkgo0equ5x8vjr2ku7vz3h2zfwaevdtkmof28xskuqhxpe79rk6sf302kix9jq9enaavr6yyxydqe18v5noclfc34vplgk4xv3izkigwkjko7qzpavzt8qd5t7hhm64s3axthiwwpmeg5jslt3r2gn9x6gaw5ls64ls71k6p4y306zxzuvdewk47k293c34bylabd7rsgckpn7r2i0z6b9n1a1v1q6dmys01khc38zn8olhz4kko79wsv2vzkrxt40j4kascsniw11rys6ayn0e24312x6t8kq5mskgxtqu50p3ffs7lbamnl70elk7xsrm6zvv4bc89ac5xae2v72a3mn087i4mq25453oy2wkp02wche427mz2wxc8g95v38tsjc69kvie3tmjml4tx1vx1vkvvhhbygtu8agpfv4el3y2hqw239eqjs85wsf4rzg49h2hs3v6fz7z9gos9lrnj5ef6wihubvfj331zto8l6cuqlrcwd5qa3esttu66g18yeie8zm1bg1icvtts5x2dtf36j29oqwrodr0ui4utxu5bejre058cqjp637qjz4af7c9vzxfwbw6h2m7krq143ykwbuouh865hmtpwpf7tyh0ekdyki6w8akaxgadzvu525y8msb4jghuavb2u947ybwirkp0vpx5h44fq7ix7wd4y6me7alzkmpk342na9urq6e4cmg9zwcbqsv4z414a3zrxe0ftlfp8v4x14q9bv3ckmpvp7oie0ditve8oys8y96hyoc8cig6wxoortswxph8j7f4ohrddrlz9dfk4oitc6wcjoyh56j97rzjdwu7ej8uijgltjos2uqixoijl5xi0891ikdp4vath8j0zcvqba3nfhjxtnhb5v25rk67tiiq97zbi1tcev6gcoed325jmzgqudx7mvwmkpeipqzrtjiltb8x2w84gw2op76nu340da951jjpqzbnvs04mogfnqai6tnin5liwlw9aoh8uowhg8a8n83d01b6x660nwyiqyypeysxzw4jyjvlppztjky9i3ih521z19zcb
8j3kfl1wcdpzb9eb3zftdrtqguxm8r68n3xurgyd5qw9sp439a9ecg4wiv6yizcgmyqsloxrwudvs5w50txwjbtcpo1ph5mnt4jh0b9aj8t7e3eav0mzhx00k8l7rhamssq1h7az24ayvqs5z7y709gukm4tokjwsfzvixsl2itcysknjpgowxym1c0u9qu6vaxg8lh645ezs79ubztrhusfqzvjx7hik3kw8x9klau13jwfpbz98rj8bqgvxodkt15gyyaj5tw90ttc7go5t6kkh4mlar3qakck04gh3zq41m0fpgt4fqouy0qaah4mic85p883oz1dos80yao0mmabrr78ye71ap77onf7edqkqioyz59v7e4lvhr2vw68ih9obxgzu2t3394hdz647063is2y0sw7c1rq3g4smlok5j7wp2prticxo7wpip22opwzf5e0hcy8ni3fnrbbexqf6vmffenqkj09hphvzc653bepyl87ytqzs5qrpy5gnp9qt23lq040xa3jr9gk6zryz07xgiib62 00:28:50.799 00:48:24 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:28:50.799 00:48:24 -- dd/basic_rw.sh@59 -- # gen_conf 00:28:50.799 00:48:24 -- dd/common.sh@31 -- # xtrace_disable 00:28:50.799 00:48:24 -- common/autotest_common.sh@10 -- # set +x 00:28:51.058 { 00:28:51.058 "subsystems": [ 00:28:51.058 { 00:28:51.058 "subsystem": "bdev", 00:28:51.058 "config": [ 00:28:51.058 { 00:28:51.058 "params": { 00:28:51.058 "trtype": "pcie", 00:28:51.058 "traddr": "0000:00:10.0", 00:28:51.058 "name": "Nvme0" 00:28:51.058 }, 00:28:51.058 "method": "bdev_nvme_attach_controller" 00:28:51.058 }, 00:28:51.058 { 00:28:51.058 "method": "bdev_wait_for_examine" 00:28:51.058 } 00:28:51.058 ] 00:28:51.058 } 00:28:51.058 ] 00:28:51.058 } 00:28:51.058 [2024-04-27 00:48:24.428220] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:51.058 [2024-04-27 00:48:24.428436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142014 ] 00:28:51.058 [2024-04-27 00:48:24.597297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.316 [2024-04-27 00:48:24.796125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.815  Copying: 4096/4096 [B] (average 4000 kBps) 00:28:52.815 00:28:52.815 00:48:26 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:28:52.815 00:48:26 -- dd/basic_rw.sh@65 -- # gen_conf 00:28:52.815 00:48:26 -- dd/common.sh@31 -- # xtrace_disable 00:28:52.815 00:48:26 -- common/autotest_common.sh@10 -- # set +x 00:28:52.815 { 00:28:52.815 "subsystems": [ 00:28:52.815 { 00:28:52.815 "subsystem": "bdev", 00:28:52.815 "config": [ 00:28:52.815 { 00:28:52.815 "params": { 00:28:52.815 "trtype": "pcie", 00:28:52.815 "traddr": "0000:00:10.0", 00:28:52.815 "name": "Nvme0" 00:28:52.815 }, 00:28:52.815 "method": "bdev_nvme_attach_controller" 00:28:52.815 }, 00:28:52.815 { 00:28:52.815 "method": "bdev_wait_for_examine" 00:28:52.815 } 00:28:52.815 ] 00:28:52.815 } 00:28:52.815 ] 00:28:52.815 } 00:28:52.815 [2024-04-27 00:48:26.222435] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
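dd_rw_offset exercises the --seek/--skip pair: one 4 KiB block of generated data is written at block offset 1 on the bdev, read back from the same offset, and compared in-shell with read -rn4096 rather than diff — that comparison is the long [[ ... == ... ]] just below. A condensed sketch; the printf redirection is an assumption, the flags are the ones in the trace:

    data=$(gen_bytes 4096)                                                  # 4 KiB of [a-z0-9] payload
    printf %s "$data" > "$D/dd.dump0"
    $DD --if=$D/dd.dump0 --ob=Nvme0n1 --seek=1 --json conf.json             # write at block offset 1
    $DD --ib=Nvme0n1 --of=$D/dd.dump1 --skip=1 --count=1 --json conf.json   # read block 1 back
    read -rn4096 data_check < "$D/dd.dump1"
    [[ $data == "$data_check" ]]                                            # byte-identical round trip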
00:28:52.815 [2024-04-27 00:48:26.222624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142035 ] 00:28:52.815 [2024-04-27 00:48:26.396234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.074 [2024-04-27 00:48:26.623528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.579  Copying: 4096/4096 [B] (average 4000 kBps) 00:28:54.579 00:28:54.579 00:48:28 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:28:54.579 00:48:28 -- dd/basic_rw.sh@72 -- # [[ o57amgeziu3s92owpaok1knz4h7c8lyh3xi55z3vctjw1814ne53n3umigdbkpqze1b1ik9posvpfsnckipamjxg1ekg3e1oq9as7js7f43l149oibcypo5mgd2viuwv4m1zeee9d9c2cw1oxnne7mc5wycq6ebnopnf6jr9c4rxop965z24zpkbubyae46yvtjt5gco809jxcyhtmiiap46bef435l199blvbopvv3te5gnxgdmnqurwpiynzlnetkj7h7s7090x4tqexj5m3kxwk3f1lykrz7u8f6sxpcp73sbkao7pbfptv4ev9zm38ffgt5fryyla5h84gakh7i3xq9h5ypx3i91vu2atpf3iujxc0yyq3y5kvltgnaky5oxbe2uul4xlag15zy0yw8ctbmndwgdpml02abf4xciw67g3tktfsszbaheffiw0lucavhwsx2hsc2kw78u0g60nvq8jfwkk9rtqwh8t8kjh1fd15icz0wacxtezg40qgx0nv3ybg9gukjclkzlr9vopogf6rkh53hsfqhgqtoj25aoarormd2iygnkfypcyfx9yb8fxxlvgo8ciiodybrfp6jgu1vmbdnh5t3t2b4d9wmtuoof6qkkjho7vwx5fjwgm5y0wt1dugdy1gsz4uso567wzhkmto8vvyf3d9op7zzvihzsxep7ppso00kkzi9upwln4zmcwf1om0uqpwaiayrdw2x9oz2p6n2wcmbfebt3kbvfxm0s5ek189saoigwhag0wtvzmeag4dvgr1zrpd75hupulpuuse0oruldkwx8jwmijdtwmh6anmktc5bxwbwa6i3i6nhphatxvnl5865tovj6yvt3klftmxxmu5u86niryembque4u7iceu32npu738wkdy8uxlk9iqz9guqtrr6mag82xgngmt2vhwvsxv512yo1x49u8hph35ekaez3sx6dlv29m01w2ilfbbctb4a8qzj5nab7w5j5p1x4eqo7sw3icgnx573z8gyl5iripr6cjulz8enmyym7bzzjikr7fbcp2pbflr0bzqezegippbrcgs45f1gmcu08jhanakhezml8nllxyg9fr4acs3p2lxhq06bd4qisazcmq6we33xpwcu4sws9weq9tqa1m6zdki4nefd2apwgyl9nkosgoyo3ijtliqga1pzg0ruewsw615m8t65agaqn8oarnca0iifxagl8zgm226r1782njpo4qrhpbh01n4nak4ha90zojp3i8q6e2oiq5rjychcg56m26agwcqkhjuh7uv8s3n25znrm7m32mxofnuegqkj94zg7u365cp9z0mjcu59wkjl1houp6bikfjlbdsvz7weu645sy0atv83b8cbrkbni3vmna7rqui4exsq5u3eqfenjyzxlrvw6hq1m1r0kki22dcw7l831kwkm9oey8vzhy734rk0n3zp885xh2tl0nqt210i158km4g2r6kszjuuk8gac2fjmipozmkunaer58vwna3newklif9g1th2ceyu6w4easrn4pnkkt2ugufz3e8dnf3egx6q8h4sxbmsjml5d3lnhct4i2oyb635t6urhd9nq742bhjxd35k21xbzk14311wbymc3rtu5desy26xtu0moe1851wjwplgcdd8fcaxrowkz0zyafjokkdxtzzfegcm93yxu92dy84h0v8m2qe96meaac5iv01q42na3aa4jv3vdwns159y1b9eczwmqyedllsvc4dmwwarsi4ii30zeyqiac3eai0zn5wl0kes1046la2ol6rhoo57ck6p68fpzkc9w6vp8xj6dswtn96m3otj2dmoj3mdasyzs97okyko2zq32hsequgfybeo239b7vjnmhdl448dlg886xr1mvmjpc3kxf5neyzkzdx6fy7jrqb3zmcqsuxhmzm7j23exqxo3180x2r1vulh1t2g7w9fuawbrx654f5fdrxrpk5oz0kihnjolzt7s4hhlpcxfc357hqyomu9ispqm61giitixeutg6u7vlg802i6bz8invw5uhx37830pxyg509wnky1i4pt6up5fwza5p0tiu85ii8zosmtonhfsz61jcb9t3ch9mfm31l5rub0pn0btz9af38jercvpsjbbtc3ui2tk2jqdoxepkx82qt72qon7cew368ogmpdcn9iocxcwy66dv8fh7cahhyxo0ja0wea97veilozbt5c85hw4e90nbwkr81jdw2vlfarazj7001o2cs00vcq4ope5npu0q9prm0y3qbtczajhhatqjq2yil547a0iohx6jem5lpr4st3dbnhlp9v85q3pc5mwb3qjxsybadv2w1nok4uxr1ph0hhbqzrvkhwcxb0gl22kzg00t2n7negve2fxf34k1hgdzdmx7gousez5sjwvnvysiod2m5vo9kckdxk8gtokpg46vkgo0equ5x8vjr2ku7vz3h2zfwaevdtkmof28xskuqhxpe79rk6sf302kix9jq9enaavr6yyxydqe18v5noclfc34vplgk4xv3izkigwkjko7qzpavzt8qd5t7hhm64s3axthiwwpmeg5jslt3r2gn9x6gaw5ls64ls71k6p4y306zxzuvdewk47k293c34bylabd7rsgckpn7r2i0z6b9n1a1v1q6dmys01khc38zn8olhz4kko79wsv2vzkrxt40j4kascsniw11rys6ayn0e24312x6t8kq5mskgxtqu50p3ffs7lbamnl70elk7xsrm6zvv4bc89ac5xae2v72a3mn087i4mq25453oy2wkp02wche427mz2wxc8g95v
38tsjc69kvie3tmjml4tx1vx1vkvvhhbygtu8agpfv4el3y2hqw239eqjs85wsf4rzg49h2hs3v6fz7z9gos9lrnj5ef6wihubvfj331zto8l6cuqlrcwd5qa3esttu66g18yeie8zm1bg1icvtts5x2dtf36j29oqwrodr0ui4utxu5bejre058cqjp637qjz4af7c9vzxfwbw6h2m7krq143ykwbuouh865hmtpwpf7tyh0ekdyki6w8akaxgadzvu525y8msb4jghuavb2u947ybwirkp0vpx5h44fq7ix7wd4y6me7alzkmpk342na9urq6e4cmg9zwcbqsv4z414a3zrxe0ftlfp8v4x14q9bv3ckmpvp7oie0ditve8oys8y96hyoc8cig6wxoortswxph8j7f4ohrddrlz9dfk4oitc6wcjoyh56j97rzjdwu7ej8uijgltjos2uqixoijl5xi0891ikdp4vath8j0zcvqba3nfhjxtnhb5v25rk67tiiq97zbi1tcev6gcoed325jmzgqudx7mvwmkpeipqzrtjiltb8x2w84gw2op76nu340da951jjpqzbnvs04mogfnqai6tnin5liwlw9aoh8uowhg8a8n83d01b6x660nwyiqyypeysxzw4jyjvlppztjky9i3ih521z19zcb8j3kfl1wcdpzb9eb3zftdrtqguxm8r68n3xurgyd5qw9sp439a9ecg4wiv6yizcgmyqsloxrwudvs5w50txwjbtcpo1ph5mnt4jh0b9aj8t7e3eav0mzhx00k8l7rhamssq1h7az24ayvqs5z7y709gukm4tokjwsfzvixsl2itcysknjpgowxym1c0u9qu6vaxg8lh645ezs79ubztrhusfqzvjx7hik3kw8x9klau13jwfpbz98rj8bqgvxodkt15gyyaj5tw90ttc7go5t6kkh4mlar3qakck04gh3zq41m0fpgt4fqouy0qaah4mic85p883oz1dos80yao0mmabrr78ye71ap77onf7edqkqioyz59v7e4lvhr2vw68ih9obxgzu2t3394hdz647063is2y0sw7c1rq3g4smlok5j7wp2prticxo7wpip22opwzf5e0hcy8ni3fnrbbexqf6vmffenqkj09hphvzc653bepyl87ytqzs5qrpy5gnp9qt23lq040xa3jr9gk6zryz07xgiib62 == \o\5\7\a\m\g\e\z\i\u\3\s\9\2\o\w\p\a\o\k\1\k\n\z\4\h\7\c\8\l\y\h\3\x\i\5\5\z\3\v\c\t\j\w\1\8\1\4\n\e\5\3\n\3\u\m\i\g\d\b\k\p\q\z\e\1\b\1\i\k\9\p\o\s\v\p\f\s\n\c\k\i\p\a\m\j\x\g\1\e\k\g\3\e\1\o\q\9\a\s\7\j\s\7\f\4\3\l\1\4\9\o\i\b\c\y\p\o\5\m\g\d\2\v\i\u\w\v\4\m\1\z\e\e\e\9\d\9\c\2\c\w\1\o\x\n\n\e\7\m\c\5\w\y\c\q\6\e\b\n\o\p\n\f\6\j\r\9\c\4\r\x\o\p\9\6\5\z\2\4\z\p\k\b\u\b\y\a\e\4\6\y\v\t\j\t\5\g\c\o\8\0\9\j\x\c\y\h\t\m\i\i\a\p\4\6\b\e\f\4\3\5\l\1\9\9\b\l\v\b\o\p\v\v\3\t\e\5\g\n\x\g\d\m\n\q\u\r\w\p\i\y\n\z\l\n\e\t\k\j\7\h\7\s\7\0\9\0\x\4\t\q\e\x\j\5\m\3\k\x\w\k\3\f\1\l\y\k\r\z\7\u\8\f\6\s\x\p\c\p\7\3\s\b\k\a\o\7\p\b\f\p\t\v\4\e\v\9\z\m\3\8\f\f\g\t\5\f\r\y\y\l\a\5\h\8\4\g\a\k\h\7\i\3\x\q\9\h\5\y\p\x\3\i\9\1\v\u\2\a\t\p\f\3\i\u\j\x\c\0\y\y\q\3\y\5\k\v\l\t\g\n\a\k\y\5\o\x\b\e\2\u\u\l\4\x\l\a\g\1\5\z\y\0\y\w\8\c\t\b\m\n\d\w\g\d\p\m\l\0\2\a\b\f\4\x\c\i\w\6\7\g\3\t\k\t\f\s\s\z\b\a\h\e\f\f\i\w\0\l\u\c\a\v\h\w\s\x\2\h\s\c\2\k\w\7\8\u\0\g\6\0\n\v\q\8\j\f\w\k\k\9\r\t\q\w\h\8\t\8\k\j\h\1\f\d\1\5\i\c\z\0\w\a\c\x\t\e\z\g\4\0\q\g\x\0\n\v\3\y\b\g\9\g\u\k\j\c\l\k\z\l\r\9\v\o\p\o\g\f\6\r\k\h\5\3\h\s\f\q\h\g\q\t\o\j\2\5\a\o\a\r\o\r\m\d\2\i\y\g\n\k\f\y\p\c\y\f\x\9\y\b\8\f\x\x\l\v\g\o\8\c\i\i\o\d\y\b\r\f\p\6\j\g\u\1\v\m\b\d\n\h\5\t\3\t\2\b\4\d\9\w\m\t\u\o\o\f\6\q\k\k\j\h\o\7\v\w\x\5\f\j\w\g\m\5\y\0\w\t\1\d\u\g\d\y\1\g\s\z\4\u\s\o\5\6\7\w\z\h\k\m\t\o\8\v\v\y\f\3\d\9\o\p\7\z\z\v\i\h\z\s\x\e\p\7\p\p\s\o\0\0\k\k\z\i\9\u\p\w\l\n\4\z\m\c\w\f\1\o\m\0\u\q\p\w\a\i\a\y\r\d\w\2\x\9\o\z\2\p\6\n\2\w\c\m\b\f\e\b\t\3\k\b\v\f\x\m\0\s\5\e\k\1\8\9\s\a\o\i\g\w\h\a\g\0\w\t\v\z\m\e\a\g\4\d\v\g\r\1\z\r\p\d\7\5\h\u\p\u\l\p\u\u\s\e\0\o\r\u\l\d\k\w\x\8\j\w\m\i\j\d\t\w\m\h\6\a\n\m\k\t\c\5\b\x\w\b\w\a\6\i\3\i\6\n\h\p\h\a\t\x\v\n\l\5\8\6\5\t\o\v\j\6\y\v\t\3\k\l\f\t\m\x\x\m\u\5\u\8\6\n\i\r\y\e\m\b\q\u\e\4\u\7\i\c\e\u\3\2\n\p\u\7\3\8\w\k\d\y\8\u\x\l\k\9\i\q\z\9\g\u\q\t\r\r\6\m\a\g\8\2\x\g\n\g\m\t\2\v\h\w\v\s\x\v\5\1\2\y\o\1\x\4\9\u\8\h\p\h\3\5\e\k\a\e\z\3\s\x\6\d\l\v\2\9\m\0\1\w\2\i\l\f\b\b\c\t\b\4\a\8\q\z\j\5\n\a\b\7\w\5\j\5\p\1\x\4\e\q\o\7\s\w\3\i\c\g\n\x\5\7\3\z\8\g\y\l\5\i\r\i\p\r\6\c\j\u\l\z\8\e\n\m\y\y\m\7\b\z\z\j\i\k\r\7\f\b\c\p\2\p\b\f\l\r\0\b\z\q\e\z\e\g\i\p\p\b\r\c\g\s\4\5\f\1\g\m\c\u\0\8\j\h\a\n\a\k\h\e\z\m\l\8\n\l\l\x\y\g\9\f\r\4\a\c\s\3\p\2\l\x\h\q\0\6\b\d\4\q\i\s\a\z\c\m\q\6\w\e\3\3\x\p\w\c\u\4\s\w\s\9\w\e\q\9\t\q\a\
1\m\6\z\d\k\i\4\n\e\f\d\2\a\p\w\g\y\l\9\n\k\o\s\g\o\y\o\3\i\j\t\l\i\q\g\a\1\p\z\g\0\r\u\e\w\s\w\6\1\5\m\8\t\6\5\a\g\a\q\n\8\o\a\r\n\c\a\0\i\i\f\x\a\g\l\8\z\g\m\2\2\6\r\1\7\8\2\n\j\p\o\4\q\r\h\p\b\h\0\1\n\4\n\a\k\4\h\a\9\0\z\o\j\p\3\i\8\q\6\e\2\o\i\q\5\r\j\y\c\h\c\g\5\6\m\2\6\a\g\w\c\q\k\h\j\u\h\7\u\v\8\s\3\n\2\5\z\n\r\m\7\m\3\2\m\x\o\f\n\u\e\g\q\k\j\9\4\z\g\7\u\3\6\5\c\p\9\z\0\m\j\c\u\5\9\w\k\j\l\1\h\o\u\p\6\b\i\k\f\j\l\b\d\s\v\z\7\w\e\u\6\4\5\s\y\0\a\t\v\8\3\b\8\c\b\r\k\b\n\i\3\v\m\n\a\7\r\q\u\i\4\e\x\s\q\5\u\3\e\q\f\e\n\j\y\z\x\l\r\v\w\6\h\q\1\m\1\r\0\k\k\i\2\2\d\c\w\7\l\8\3\1\k\w\k\m\9\o\e\y\8\v\z\h\y\7\3\4\r\k\0\n\3\z\p\8\8\5\x\h\2\t\l\0\n\q\t\2\1\0\i\1\5\8\k\m\4\g\2\r\6\k\s\z\j\u\u\k\8\g\a\c\2\f\j\m\i\p\o\z\m\k\u\n\a\e\r\5\8\v\w\n\a\3\n\e\w\k\l\i\f\9\g\1\t\h\2\c\e\y\u\6\w\4\e\a\s\r\n\4\p\n\k\k\t\2\u\g\u\f\z\3\e\8\d\n\f\3\e\g\x\6\q\8\h\4\s\x\b\m\s\j\m\l\5\d\3\l\n\h\c\t\4\i\2\o\y\b\6\3\5\t\6\u\r\h\d\9\n\q\7\4\2\b\h\j\x\d\3\5\k\2\1\x\b\z\k\1\4\3\1\1\w\b\y\m\c\3\r\t\u\5\d\e\s\y\2\6\x\t\u\0\m\o\e\1\8\5\1\w\j\w\p\l\g\c\d\d\8\f\c\a\x\r\o\w\k\z\0\z\y\a\f\j\o\k\k\d\x\t\z\z\f\e\g\c\m\9\3\y\x\u\9\2\d\y\8\4\h\0\v\8\m\2\q\e\9\6\m\e\a\a\c\5\i\v\0\1\q\4\2\n\a\3\a\a\4\j\v\3\v\d\w\n\s\1\5\9\y\1\b\9\e\c\z\w\m\q\y\e\d\l\l\s\v\c\4\d\m\w\w\a\r\s\i\4\i\i\3\0\z\e\y\q\i\a\c\3\e\a\i\0\z\n\5\w\l\0\k\e\s\1\0\4\6\l\a\2\o\l\6\r\h\o\o\5\7\c\k\6\p\6\8\f\p\z\k\c\9\w\6\v\p\8\x\j\6\d\s\w\t\n\9\6\m\3\o\t\j\2\d\m\o\j\3\m\d\a\s\y\z\s\9\7\o\k\y\k\o\2\z\q\3\2\h\s\e\q\u\g\f\y\b\e\o\2\3\9\b\7\v\j\n\m\h\d\l\4\4\8\d\l\g\8\8\6\x\r\1\m\v\m\j\p\c\3\k\x\f\5\n\e\y\z\k\z\d\x\6\f\y\7\j\r\q\b\3\z\m\c\q\s\u\x\h\m\z\m\7\j\2\3\e\x\q\x\o\3\1\8\0\x\2\r\1\v\u\l\h\1\t\2\g\7\w\9\f\u\a\w\b\r\x\6\5\4\f\5\f\d\r\x\r\p\k\5\o\z\0\k\i\h\n\j\o\l\z\t\7\s\4\h\h\l\p\c\x\f\c\3\5\7\h\q\y\o\m\u\9\i\s\p\q\m\6\1\g\i\i\t\i\x\e\u\t\g\6\u\7\v\l\g\8\0\2\i\6\b\z\8\i\n\v\w\5\u\h\x\3\7\8\3\0\p\x\y\g\5\0\9\w\n\k\y\1\i\4\p\t\6\u\p\5\f\w\z\a\5\p\0\t\i\u\8\5\i\i\8\z\o\s\m\t\o\n\h\f\s\z\6\1\j\c\b\9\t\3\c\h\9\m\f\m\3\1\l\5\r\u\b\0\p\n\0\b\t\z\9\a\f\3\8\j\e\r\c\v\p\s\j\b\b\t\c\3\u\i\2\t\k\2\j\q\d\o\x\e\p\k\x\8\2\q\t\7\2\q\o\n\7\c\e\w\3\6\8\o\g\m\p\d\c\n\9\i\o\c\x\c\w\y\6\6\d\v\8\f\h\7\c\a\h\h\y\x\o\0\j\a\0\w\e\a\9\7\v\e\i\l\o\z\b\t\5\c\8\5\h\w\4\e\9\0\n\b\w\k\r\8\1\j\d\w\2\v\l\f\a\r\a\z\j\7\0\0\1\o\2\c\s\0\0\v\c\q\4\o\p\e\5\n\p\u\0\q\9\p\r\m\0\y\3\q\b\t\c\z\a\j\h\h\a\t\q\j\q\2\y\i\l\5\4\7\a\0\i\o\h\x\6\j\e\m\5\l\p\r\4\s\t\3\d\b\n\h\l\p\9\v\8\5\q\3\p\c\5\m\w\b\3\q\j\x\s\y\b\a\d\v\2\w\1\n\o\k\4\u\x\r\1\p\h\0\h\h\b\q\z\r\v\k\h\w\c\x\b\0\g\l\2\2\k\z\g\0\0\t\2\n\7\n\e\g\v\e\2\f\x\f\3\4\k\1\h\g\d\z\d\m\x\7\g\o\u\s\e\z\5\s\j\w\v\n\v\y\s\i\o\d\2\m\5\v\o\9\k\c\k\d\x\k\8\g\t\o\k\p\g\4\6\v\k\g\o\0\e\q\u\5\x\8\v\j\r\2\k\u\7\v\z\3\h\2\z\f\w\a\e\v\d\t\k\m\o\f\2\8\x\s\k\u\q\h\x\p\e\7\9\r\k\6\s\f\3\0\2\k\i\x\9\j\q\9\e\n\a\a\v\r\6\y\y\x\y\d\q\e\1\8\v\5\n\o\c\l\f\c\3\4\v\p\l\g\k\4\x\v\3\i\z\k\i\g\w\k\j\k\o\7\q\z\p\a\v\z\t\8\q\d\5\t\7\h\h\m\6\4\s\3\a\x\t\h\i\w\w\p\m\e\g\5\j\s\l\t\3\r\2\g\n\9\x\6\g\a\w\5\l\s\6\4\l\s\7\1\k\6\p\4\y\3\0\6\z\x\z\u\v\d\e\w\k\4\7\k\2\9\3\c\3\4\b\y\l\a\b\d\7\r\s\g\c\k\p\n\7\r\2\i\0\z\6\b\9\n\1\a\1\v\1\q\6\d\m\y\s\0\1\k\h\c\3\8\z\n\8\o\l\h\z\4\k\k\o\7\9\w\s\v\2\v\z\k\r\x\t\4\0\j\4\k\a\s\c\s\n\i\w\1\1\r\y\s\6\a\y\n\0\e\2\4\3\1\2\x\6\t\8\k\q\5\m\s\k\g\x\t\q\u\5\0\p\3\f\f\s\7\l\b\a\m\n\l\7\0\e\l\k\7\x\s\r\m\6\z\v\v\4\b\c\8\9\a\c\5\x\a\e\2\v\7\2\a\3\m\n\0\8\7\i\4\m\q\2\5\4\5\3\o\y\2\w\k\p\0\2\w\c\h\e\4\2\7\m\z\2\w\x\c\8\g\9\5\v\3\8\t\s\j\c\6\9\k\v\i\e\3\t\m\j\m\l\4\t\x\1\v\x\1\v\k\v\v\h\h\b\y\g\t\u\8\a\g\p\f\v\4\e\l\3\y\2\h\q\w\2\3\9\e\q\j\s\8\5\w\s\f\4\r\z\g\4\9\h\2\h\s
\3\v\6\f\z\7\z\9\g\o\s\9\l\r\n\j\5\e\f\6\w\i\h\u\b\v\f\j\3\3\1\z\t\o\8\l\6\c\u\q\l\r\c\w\d\5\q\a\3\e\s\t\t\u\6\6\g\1\8\y\e\i\e\8\z\m\1\b\g\1\i\c\v\t\t\s\5\x\2\d\t\f\3\6\j\2\9\o\q\w\r\o\d\r\0\u\i\4\u\t\x\u\5\b\e\j\r\e\0\5\8\c\q\j\p\6\3\7\q\j\z\4\a\f\7\c\9\v\z\x\f\w\b\w\6\h\2\m\7\k\r\q\1\4\3\y\k\w\b\u\o\u\h\8\6\5\h\m\t\p\w\p\f\7\t\y\h\0\e\k\d\y\k\i\6\w\8\a\k\a\x\g\a\d\z\v\u\5\2\5\y\8\m\s\b\4\j\g\h\u\a\v\b\2\u\9\4\7\y\b\w\i\r\k\p\0\v\p\x\5\h\4\4\f\q\7\i\x\7\w\d\4\y\6\m\e\7\a\l\z\k\m\p\k\3\4\2\n\a\9\u\r\q\6\e\4\c\m\g\9\z\w\c\b\q\s\v\4\z\4\1\4\a\3\z\r\x\e\0\f\t\l\f\p\8\v\4\x\1\4\q\9\b\v\3\c\k\m\p\v\p\7\o\i\e\0\d\i\t\v\e\8\o\y\s\8\y\9\6\h\y\o\c\8\c\i\g\6\w\x\o\o\r\t\s\w\x\p\h\8\j\7\f\4\o\h\r\d\d\r\l\z\9\d\f\k\4\o\i\t\c\6\w\c\j\o\y\h\5\6\j\9\7\r\z\j\d\w\u\7\e\j\8\u\i\j\g\l\t\j\o\s\2\u\q\i\x\o\i\j\l\5\x\i\0\8\9\1\i\k\d\p\4\v\a\t\h\8\j\0\z\c\v\q\b\a\3\n\f\h\j\x\t\n\h\b\5\v\2\5\r\k\6\7\t\i\i\q\9\7\z\b\i\1\t\c\e\v\6\g\c\o\e\d\3\2\5\j\m\z\g\q\u\d\x\7\m\v\w\m\k\p\e\i\p\q\z\r\t\j\i\l\t\b\8\x\2\w\8\4\g\w\2\o\p\7\6\n\u\3\4\0\d\a\9\5\1\j\j\p\q\z\b\n\v\s\0\4\m\o\g\f\n\q\a\i\6\t\n\i\n\5\l\i\w\l\w\9\a\o\h\8\u\o\w\h\g\8\a\8\n\8\3\d\0\1\b\6\x\6\6\0\n\w\y\i\q\y\y\p\e\y\s\x\z\w\4\j\y\j\v\l\p\p\z\t\j\k\y\9\i\3\i\h\5\2\1\z\1\9\z\c\b\8\j\3\k\f\l\1\w\c\d\p\z\b\9\e\b\3\z\f\t\d\r\t\q\g\u\x\m\8\r\6\8\n\3\x\u\r\g\y\d\5\q\w\9\s\p\4\3\9\a\9\e\c\g\4\w\i\v\6\y\i\z\c\g\m\y\q\s\l\o\x\r\w\u\d\v\s\5\w\5\0\t\x\w\j\b\t\c\p\o\1\p\h\5\m\n\t\4\j\h\0\b\9\a\j\8\t\7\e\3\e\a\v\0\m\z\h\x\0\0\k\8\l\7\r\h\a\m\s\s\q\1\h\7\a\z\2\4\a\y\v\q\s\5\z\7\y\7\0\9\g\u\k\m\4\t\o\k\j\w\s\f\z\v\i\x\s\l\2\i\t\c\y\s\k\n\j\p\g\o\w\x\y\m\1\c\0\u\9\q\u\6\v\a\x\g\8\l\h\6\4\5\e\z\s\7\9\u\b\z\t\r\h\u\s\f\q\z\v\j\x\7\h\i\k\3\k\w\8\x\9\k\l\a\u\1\3\j\w\f\p\b\z\9\8\r\j\8\b\q\g\v\x\o\d\k\t\1\5\g\y\y\a\j\5\t\w\9\0\t\t\c\7\g\o\5\t\6\k\k\h\4\m\l\a\r\3\q\a\k\c\k\0\4\g\h\3\z\q\4\1\m\0\f\p\g\t\4\f\q\o\u\y\0\q\a\a\h\4\m\i\c\8\5\p\8\8\3\o\z\1\d\o\s\8\0\y\a\o\0\m\m\a\b\r\r\7\8\y\e\7\1\a\p\7\7\o\n\f\7\e\d\q\k\q\i\o\y\z\5\9\v\7\e\4\l\v\h\r\2\v\w\6\8\i\h\9\o\b\x\g\z\u\2\t\3\3\9\4\h\d\z\6\4\7\0\6\3\i\s\2\y\0\s\w\7\c\1\r\q\3\g\4\s\m\l\o\k\5\j\7\w\p\2\p\r\t\i\c\x\o\7\w\p\i\p\2\2\o\p\w\z\f\5\e\0\h\c\y\8\n\i\3\f\n\r\b\b\e\x\q\f\6\v\m\f\f\e\n\q\k\j\0\9\h\p\h\v\z\c\6\5\3\b\e\p\y\l\8\7\y\t\q\z\s\5\q\r\p\y\5\g\n\p\9\q\t\2\3\l\q\0\4\0\x\a\3\j\r\9\g\k\6\z\r\y\z\0\7\x\g\i\i\b\6\2 ]] 00:28:54.580 00:28:54.580 real 0m3.715s 00:28:54.580 user 0m3.081s 00:28:54.580 sys 0m0.491s 00:28:54.580 00:48:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:54.580 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:28:54.580 ************************************ 00:28:54.580 END TEST dd_rw_offset 00:28:54.580 ************************************ 00:28:54.580 00:48:28 -- dd/basic_rw.sh@1 -- # cleanup 00:28:54.580 00:48:28 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:28:54.580 00:48:28 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:54.580 00:48:28 -- dd/common.sh@11 -- # local nvme_ref= 00:28:54.580 00:48:28 -- dd/common.sh@12 -- # local size=0xffff 00:28:54.580 00:48:28 -- dd/common.sh@14 -- # local bs=1048576 00:28:54.580 00:48:28 -- dd/common.sh@15 -- # local count=1 00:28:54.580 00:48:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:28:54.580 00:48:28 -- dd/common.sh@18 -- # gen_conf 00:28:54.580 00:48:28 -- dd/common.sh@31 -- # xtrace_disable 00:28:54.580 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:28:54.580 { 00:28:54.580 "subsystems": [ 00:28:54.580 { 00:28:54.580 
"subsystem": "bdev", 00:28:54.580 "config": [ 00:28:54.580 { 00:28:54.580 "params": { 00:28:54.580 "trtype": "pcie", 00:28:54.580 "traddr": "0000:00:10.0", 00:28:54.580 "name": "Nvme0" 00:28:54.580 }, 00:28:54.580 "method": "bdev_nvme_attach_controller" 00:28:54.580 }, 00:28:54.580 { 00:28:54.580 "method": "bdev_wait_for_examine" 00:28:54.580 } 00:28:54.580 ] 00:28:54.580 } 00:28:54.580 ] 00:28:54.580 } 00:28:54.580 [2024-04-27 00:48:28.125055] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:54.580 [2024-04-27 00:48:28.125296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142081 ] 00:28:54.839 [2024-04-27 00:48:28.293947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.107 [2024-04-27 00:48:28.483362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.755  Copying: 1024/1024 [kB] (average 500 MBps) 00:28:56.755 00:28:56.755 00:48:29 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:56.755 00:28:56.755 real 0m44.352s 00:28:56.755 user 0m36.591s 00:28:56.755 sys 0m6.092s 00:28:56.755 00:48:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:56.755 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.755 ************************************ 00:28:56.755 END TEST spdk_dd_basic_rw 00:28:56.755 ************************************ 00:28:56.755 00:48:29 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:28:56.755 00:48:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:56.755 00:48:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.755 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:28:56.755 ************************************ 00:28:56.755 START TEST spdk_dd_posix 00:28:56.755 ************************************ 00:28:56.755 00:48:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:28:56.755 * Looking for test storage... 
00:28:56.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:56.755 00:48:30 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:56.755 00:48:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.755 00:48:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.755 00:48:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.755 00:48:30 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:56.755 00:48:30 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:56.755 00:48:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:56.755 00:48:30 -- paths/export.sh@5 -- # export PATH 00:28:56.755 00:48:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:56.755 00:48:30 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:28:56.755 00:48:30 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:28:56.755 00:48:30 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:28:56.755 00:48:30 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:28:56.755 00:48:30 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:56.755 00:48:30 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:56.755 00:48:30 -- dd/posix.sh@130 -- # tests 00:28:56.755 00:48:30 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:28:56.755 * First test run, using AIO 00:28:56.755 00:48:30 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:28:56.755 00:48:30 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:56.755 00:48:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.755 00:48:30 -- common/autotest_common.sh@10 -- # set +x 00:28:56.755 ************************************ 00:28:56.755 START TEST dd_flag_append 00:28:56.755 ************************************ 00:28:56.755 00:48:30 -- common/autotest_common.sh@1111 -- # append 00:28:56.755 00:48:30 -- dd/posix.sh@16 -- # local dump0 00:28:56.755 00:48:30 -- dd/posix.sh@17 -- # local dump1 00:28:56.755 00:48:30 -- dd/posix.sh@19 -- # gen_bytes 32 00:28:56.755 00:48:30 -- dd/common.sh@98 -- # xtrace_disable 00:28:56.755 00:48:30 -- common/autotest_common.sh@10 -- # set +x 00:28:56.755 00:48:30 -- dd/posix.sh@19 -- # dump0=kile96rrmu6elsgnyt5e5x7paa5sqnga 00:28:56.755 00:48:30 -- dd/posix.sh@20 -- # gen_bytes 32 00:28:56.755 00:48:30 -- dd/common.sh@98 -- # xtrace_disable 00:28:56.755 00:48:30 -- common/autotest_common.sh@10 -- # set +x 00:28:56.755 00:48:30 -- dd/posix.sh@20 -- # dump1=xowigubznd3yv56ae2l37yoyuyu5cotv 00:28:56.755 00:48:30 -- dd/posix.sh@22 -- # printf %s kile96rrmu6elsgnyt5e5x7paa5sqnga 00:28:56.755 00:48:30 -- dd/posix.sh@23 -- # printf %s xowigubznd3yv56ae2l37yoyuyu5cotv 00:28:56.755 00:48:30 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:28:56.755 [2024-04-27 00:48:30.245897] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:56.755 [2024-04-27 00:48:30.246695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142169 ] 00:28:57.013 [2024-04-27 00:48:30.417864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.271 [2024-04-27 00:48:30.655787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.465  Copying: 32/32 [B] (average 31 kBps) 00:28:58.465 00:28:58.465 00:48:32 -- dd/posix.sh@27 -- # [[ xowigubznd3yv56ae2l37yoyuyu5cotvkile96rrmu6elsgnyt5e5x7paa5sqnga == \x\o\w\i\g\u\b\z\n\d\3\y\v\5\6\a\e\2\l\3\7\y\o\y\u\y\u\5\c\o\t\v\k\i\l\e\9\6\r\r\m\u\6\e\l\s\g\n\y\t\5\e\5\x\7\p\a\a\5\s\q\n\g\a ]] 00:28:58.465 00:28:58.465 real 0m1.837s 00:28:58.465 user 0m1.476s 00:28:58.465 sys 0m0.231s 00:28:58.465 ************************************ 00:28:58.465 END TEST dd_flag_append 00:28:58.465 ************************************ 00:28:58.465 00:48:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:58.465 00:48:32 -- common/autotest_common.sh@10 -- # set +x 00:28:58.465 00:48:32 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:28:58.465 00:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:58.465 00:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:58.465 00:48:32 -- common/autotest_common.sh@10 -- # set +x 00:28:58.725 ************************************ 00:28:58.725 START TEST dd_flag_directory 00:28:58.725 ************************************ 00:28:58.725 00:48:32 -- common/autotest_common.sh@1111 -- # directory 00:28:58.725 00:48:32 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:58.725 00:48:32 -- common/autotest_common.sh@638 -- # local es=0 00:28:58.725 
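dd_flag_append covers the plain POSIX file-to-file path (no bdev involved): two 32-byte strings are generated, each written to its own dump file, and dd.dump0 is copied onto dd.dump1 with --oflag=append; the [[ ... ]] check in the trace confirms dump1 then holds its original string followed by dump0's. A sketch — the printf redirections are assumed, everything else mirrors the trace:

    dump0=$(gen_bytes 32)
    dump1=$(gen_bytes 32)
    printf %s "$dump0" > "$D/dd.dump0"
    printf %s "$dump1" > "$D/dd.dump1"
    $DD --if=$D/dd.dump0 --of=$D/dd.dump1 --oflag=append    # output opened O_APPEND
    [[ $(<"$D/dd.dump1") == "${dump1}${dump0}" ]]           # appended bytes land after the original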
00:48:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:58.725 00:48:32 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:58.725 00:48:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:58.725 00:48:32 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:58.725 00:48:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:58.725 00:48:32 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:58.725 00:48:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:58.725 00:48:32 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:58.725 00:48:32 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:58.725 00:48:32 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:58.725 [2024-04-27 00:48:32.171486] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:28:58.725 [2024-04-27 00:48:32.171741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142229 ] 00:28:58.985 [2024-04-27 00:48:32.341672] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.985 [2024-04-27 00:48:32.542574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.552 [2024-04-27 00:48:32.850054] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:59.552 [2024-04-27 00:48:32.850162] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:59.553 [2024-04-27 00:48:32.850205] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:00.120 [2024-04-27 00:48:33.570798] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:29:00.690 00:48:33 -- common/autotest_common.sh@641 -- # es=236 00:29:00.690 00:48:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:00.690 00:48:33 -- common/autotest_common.sh@650 -- # es=108 00:29:00.690 00:48:33 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:00.690 00:48:33 -- common/autotest_common.sh@658 -- # es=1 00:29:00.690 00:48:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:00.690 00:48:33 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:00.690 00:48:33 -- common/autotest_common.sh@638 -- # local es=0 00:29:00.690 00:48:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:00.690 00:48:33 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:00.690 00:48:33 -- common/autotest_common.sh@630 -- 
# case "$(type -t "$arg")" in 00:29:00.690 00:48:33 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:00.690 00:48:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:00.690 00:48:33 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:00.690 00:48:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:00.690 00:48:33 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:00.690 00:48:33 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:00.690 00:48:33 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:00.690 [2024-04-27 00:48:34.092434] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:00.690 [2024-04-27 00:48:34.092734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142256 ] 00:29:00.690 [2024-04-27 00:48:34.270510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.948 [2024-04-27 00:48:34.473217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.207 [2024-04-27 00:48:34.774153] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:01.207 [2024-04-27 00:48:34.774254] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:01.207 [2024-04-27 00:48:34.774301] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:02.142 [2024-04-27 00:48:35.460392] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:29:02.400 00:48:35 -- common/autotest_common.sh@641 -- # es=236 00:29:02.400 00:48:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:02.400 00:48:35 -- common/autotest_common.sh@650 -- # es=108 00:29:02.400 00:48:35 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:02.400 00:48:35 -- common/autotest_common.sh@658 -- # es=1 00:29:02.400 00:48:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:02.400 00:29:02.400 real 0m3.752s 00:29:02.400 user 0m3.061s 00:29:02.400 sys 0m0.492s 00:29:02.400 00:48:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:02.400 00:48:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.400 ************************************ 00:29:02.400 END TEST dd_flag_directory 00:29:02.400 ************************************ 00:29:02.400 00:48:35 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:29:02.400 00:48:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:02.400 00:48:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:02.400 00:48:35 -- common/autotest_common.sh@10 -- # set +x 00:29:02.400 ************************************ 00:29:02.400 START TEST dd_flag_nofollow 00:29:02.400 ************************************ 00:29:02.400 00:48:35 -- common/autotest_common.sh@1111 -- # nofollow 00:29:02.400 00:48:35 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:02.400 00:48:35 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:02.400 00:48:35 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:02.400 00:48:35 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:02.400 00:48:35 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:02.400 00:48:35 -- common/autotest_common.sh@638 -- # local es=0 00:29:02.400 00:48:35 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:02.400 00:48:35 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:02.400 00:48:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:02.400 00:48:35 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:02.400 00:48:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:02.400 00:48:35 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:02.400 00:48:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:02.400 00:48:35 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:02.400 00:48:35 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:02.400 00:48:35 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:02.672 [2024-04-27 00:48:36.017805] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:02.672 [2024-04-27 00:48:36.018024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142305 ] 00:29:02.672 [2024-04-27 00:48:36.190825] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.934 [2024-04-27 00:48:36.397403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.192 [2024-04-27 00:48:36.702638] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:29:03.192 [2024-04-27 00:48:36.702760] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:29:03.192 [2024-04-27 00:48:36.702820] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:04.128 [2024-04-27 00:48:37.437041] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:29:04.386 00:48:37 -- common/autotest_common.sh@641 -- # es=216 00:29:04.386 00:48:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:04.386 00:48:37 -- common/autotest_common.sh@650 -- # es=88 00:29:04.386 00:48:37 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:04.386 00:48:37 -- common/autotest_common.sh@658 -- # es=1 00:29:04.386 00:48:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:04.386 00:48:37 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:04.386 00:48:37 -- common/autotest_common.sh@638 -- # local es=0 00:29:04.386 00:48:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:04.386 00:48:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.386 00:48:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:04.386 00:48:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.386 00:48:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:04.386 00:48:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.386 00:48:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:04.386 00:48:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.386 00:48:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:04.386 00:48:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:04.387 [2024-04-27 00:48:37.914154] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
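The two expected failures exercised above are plain open(2) errno cases: --iflag=directory in dd_flag_directory demands the path be a directory and fails with ENOTDIR ("Not a directory"), while nofollow here refuses to traverse a symlink and fails with ELOOP ("Too many levels of symbolic links"). A minimal reproduction with GNU dd, which exposes the same flag names (an analogy, not spdk_dd itself):

  echo data > dd.dump0
  dd if=dd.dump0 iflag=directory of=/dev/null       # fails: Not a directory
  ln -fs dd.dump0 dd.dump0.link
  dd if=dd.dump0.link iflag=nofollow of=/dev/null   # fails: Too many levels of symbolic links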
00:29:04.387 [2024-04-27 00:48:37.914440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142338 ] 00:29:04.645 [2024-04-27 00:48:38.082502] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.904 [2024-04-27 00:48:38.299260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.162 [2024-04-27 00:48:38.578714] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:29:05.162 [2024-04-27 00:48:38.578825] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:29:05.162 [2024-04-27 00:48:38.578869] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:05.728 [2024-04-27 00:48:39.294406] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:29:06.295 00:48:39 -- common/autotest_common.sh@641 -- # es=216 00:29:06.295 00:48:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:06.295 00:48:39 -- common/autotest_common.sh@650 -- # es=88 00:29:06.295 00:48:39 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:06.295 00:48:39 -- common/autotest_common.sh@658 -- # es=1 00:29:06.295 00:48:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:06.295 00:48:39 -- dd/posix.sh@46 -- # gen_bytes 512 00:29:06.295 00:48:39 -- dd/common.sh@98 -- # xtrace_disable 00:29:06.295 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:29:06.295 00:48:39 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:06.295 [2024-04-27 00:48:39.745736] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
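The es bookkeeping traced after every NOT call (es=216 or es=236, the (( es > 128 )) test, then es=1) is the wrapper normalizing the child's exit status before inverting it, so a test passes exactly when spdk_dd fails. A rough reconstruction of that logic, inferred from the trace rather than copied from autotest_common.sh:

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=$(( es - 128 ))   # 236 -> 108, 216 -> 88, as traced
      (( es != 0 ))                          # succeed only when the command failed
  }

The case "$es" step in the real helper additionally collapses recognised failure codes to es=1, so callers see a uniform status.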
00:29:06.295 [2024-04-27 00:48:39.746453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142360 ] 00:29:06.552 [2024-04-27 00:48:39.910280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.552 [2024-04-27 00:48:40.122537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.056  Copying: 512/512 [B] (average 500 kBps) 00:29:08.056 00:29:08.056 00:48:41 -- dd/posix.sh@49 -- # [[ acs7dkluq29agb1j6u3f5fhdrnezvm6k9q27bspfzs7gu2jtuaw8y86omhyzz8iankfgvoz9f4x2q2enra2mes6xfq3ebxkbntcmnpirphs6x76gem6lhyw9wa7ftod55qlcplom7fvlkvv31vp9itxbpi1ybxkccm7yb6jqnm13vxt4k09jrulpkq8haxaofmmv6t1n1tivoobstgxx9qeec6j47x854w7rtjxohwtgm3cqbjqx1mtx2c6ej3yvmkved1hljrrc4h85lwzknxn7b472dzbhwaybmul3zi56aawautnx9h0ycjv24ohiqm86e0bvg34c2n2s1tf5llxof6b0x5ws9geha1a9brjlb98jvnrw6rzx6fg5bg92owttgafwge8r0mxm5zxpulefrmoplcc0qf450yfpdvftrytmzjb9hjg2g41ken64mr90g77l3dzg6nc2chizgeiudmroh3ogbyldtgo8vgfy18cpp1t76id2xhtproxd == \a\c\s\7\d\k\l\u\q\2\9\a\g\b\1\j\6\u\3\f\5\f\h\d\r\n\e\z\v\m\6\k\9\q\2\7\b\s\p\f\z\s\7\g\u\2\j\t\u\a\w\8\y\8\6\o\m\h\y\z\z\8\i\a\n\k\f\g\v\o\z\9\f\4\x\2\q\2\e\n\r\a\2\m\e\s\6\x\f\q\3\e\b\x\k\b\n\t\c\m\n\p\i\r\p\h\s\6\x\7\6\g\e\m\6\l\h\y\w\9\w\a\7\f\t\o\d\5\5\q\l\c\p\l\o\m\7\f\v\l\k\v\v\3\1\v\p\9\i\t\x\b\p\i\1\y\b\x\k\c\c\m\7\y\b\6\j\q\n\m\1\3\v\x\t\4\k\0\9\j\r\u\l\p\k\q\8\h\a\x\a\o\f\m\m\v\6\t\1\n\1\t\i\v\o\o\b\s\t\g\x\x\9\q\e\e\c\6\j\4\7\x\8\5\4\w\7\r\t\j\x\o\h\w\t\g\m\3\c\q\b\j\q\x\1\m\t\x\2\c\6\e\j\3\y\v\m\k\v\e\d\1\h\l\j\r\r\c\4\h\8\5\l\w\z\k\n\x\n\7\b\4\7\2\d\z\b\h\w\a\y\b\m\u\l\3\z\i\5\6\a\a\w\a\u\t\n\x\9\h\0\y\c\j\v\2\4\o\h\i\q\m\8\6\e\0\b\v\g\3\4\c\2\n\2\s\1\t\f\5\l\l\x\o\f\6\b\0\x\5\w\s\9\g\e\h\a\1\a\9\b\r\j\l\b\9\8\j\v\n\r\w\6\r\z\x\6\f\g\5\b\g\9\2\o\w\t\t\g\a\f\w\g\e\8\r\0\m\x\m\5\z\x\p\u\l\e\f\r\m\o\p\l\c\c\0\q\f\4\5\0\y\f\p\d\v\f\t\r\y\t\m\z\j\b\9\h\j\g\2\g\4\1\k\e\n\6\4\m\r\9\0\g\7\7\l\3\d\z\g\6\n\c\2\c\h\i\z\g\e\i\u\d\m\r\o\h\3\o\g\b\y\l\d\t\g\o\8\v\g\f\y\1\8\c\p\p\1\t\7\6\i\d\2\x\h\t\p\r\o\x\d ]] 00:29:08.056 00:29:08.056 real 0m5.566s 00:29:08.056 user 0m4.530s 00:29:08.056 sys 0m0.700s 00:29:08.056 ************************************ 00:29:08.056 END TEST dd_flag_nofollow 00:29:08.056 ************************************ 00:29:08.056 00:48:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:08.056 00:48:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.056 00:48:41 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:29:08.056 00:48:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:08.056 00:48:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:08.056 00:48:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.056 ************************************ 00:29:08.056 START TEST dd_flag_noatime 00:29:08.056 ************************************ 00:29:08.056 00:48:41 -- common/autotest_common.sh@1111 -- # noatime 00:29:08.056 00:48:41 -- dd/posix.sh@53 -- # local atime_if 00:29:08.056 00:48:41 -- dd/posix.sh@54 -- # local atime_of 00:29:08.056 00:48:41 -- dd/posix.sh@58 -- # gen_bytes 512 00:29:08.056 00:48:41 -- dd/common.sh@98 -- # xtrace_disable 00:29:08.056 00:48:41 -- common/autotest_common.sh@10 -- # set +x 00:29:08.056 00:48:41 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:08.056 00:48:41 -- dd/posix.sh@60 -- # atime_if=1714178920 00:29:08.056 00:48:41 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:08.056 00:48:41 -- dd/posix.sh@61 -- # atime_of=1714178921 00:29:08.056 00:48:41 -- dd/posix.sh@66 -- # sleep 1 00:29:09.425 00:48:42 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:09.425 [2024-04-27 00:48:42.689607] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:09.425 [2024-04-27 00:48:42.689832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142423 ] 00:29:09.425 [2024-04-27 00:48:42.861309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.694 [2024-04-27 00:48:43.074187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.901  Copying: 512/512 [B] (average 500 kBps) 00:29:10.901 00:29:10.901 00:48:44 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:10.901 00:48:44 -- dd/posix.sh@69 -- # (( atime_if == 1714178920 )) 00:29:10.901 00:48:44 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:10.901 00:48:44 -- dd/posix.sh@70 -- # (( atime_of == 1714178921 )) 00:29:10.901 00:48:44 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:11.160 [2024-04-27 00:48:44.548613] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:11.160 [2024-04-27 00:48:44.548800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142453 ] 00:29:11.160 [2024-04-27 00:48:44.719356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.419 [2024-04-27 00:48:44.957758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.056  Copying: 512/512 [B] (average 500 kBps) 00:29:13.056 00:29:13.056 00:48:46 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:13.056 00:48:46 -- dd/posix.sh@73 -- # (( atime_if < 1714178925 )) 00:29:13.056 00:29:13.056 real 0m4.778s 00:29:13.056 user 0m3.042s 00:29:13.056 sys 0m0.481s 00:29:13.056 00:48:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:13.056 00:48:46 -- common/autotest_common.sh@10 -- # set +x 00:29:13.056 ************************************ 00:29:13.056 END TEST dd_flag_noatime 00:29:13.056 ************************************ 00:29:13.056 00:48:46 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:29:13.056 00:48:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:13.056 00:48:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.056 00:48:46 -- common/autotest_common.sh@10 -- # set +x 00:29:13.056 ************************************ 00:29:13.056 START TEST dd_flags_misc 00:29:13.056 ************************************ 00:29:13.056 00:48:46 -- common/autotest_common.sh@1111 -- # io 00:29:13.056 00:48:46 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:29:13.056 00:48:46 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:29:13.056 
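dd_flag_noatime, which finished just above, is a timestamp check rather than a data check: it records both files' access times with stat --printf=%X, copies with --iflag=noatime and asserts neither atime moved, then copies without the flag and asserts the source atime advanced (the closing (( atime_if < ... )) comparison). The core of the check as a standalone sketch (assumes GNU dd/stat and a filesystem where atime updates are not suppressed by mount options):

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  dd if=dd.dump0 iflag=noatime of=/dev/null
  (( $(stat --printf=%X dd.dump0) == atime_before ))   # O_NOATIME read: unchanged
  dd if=dd.dump0 of=/dev/null
  (( $(stat --printf=%X dd.dump0) > atime_before ))    # normal read: atime advanced

The dd_flags_misc setup resumes below, extending the read-flag list flags_ro into the write-flag list flags_rw.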
00:48:46 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:29:13.056 00:48:46 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:29:13.056 00:48:46 -- dd/posix.sh@86 -- # gen_bytes 512 00:29:13.056 00:48:46 -- dd/common.sh@98 -- # xtrace_disable 00:29:13.056 00:48:46 -- common/autotest_common.sh@10 -- # set +x 00:29:13.056 00:48:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:13.056 00:48:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:29:13.056 [2024-04-27 00:48:46.540654] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:13.056 [2024-04-27 00:48:46.540852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142501 ] 00:29:13.315 [2024-04-27 00:48:46.710325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.573 [2024-04-27 00:48:46.927759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.212  Copying: 512/512 [B] (average 500 kBps) 00:29:15.212 00:29:15.212 00:48:48 -- dd/posix.sh@93 -- # [[ 9h7i811qbz7z2552ncw00rzvye45xyc0x0corozos6ugyy60dsop32a1pgbu6yarzuk4kd7f43r3x5qoigc2avwpyxftio5bguam55xwkbe1ixemzcld0jhes9kr8rkcvil7id9bhzvyqw93v9atdyi455v172epoazccxbsw5wejbdzoa0p30qsqmfj17y2evfjih9x7dekszl6b5d5evu71nueadlrzjjwish3fnx5vw92l33hrc5ltg2ocol7zvcjckj1bdvp95690mfmzkjuvd4e3no0naft2tlyeathb4678qjst6ge6vcnxlgwrrtyq7qoy4f3rj40mgez3pm4m1bk8wu31cavex3nxhnqx9uy3jcpcdtqfdrn86n62aox3yjvi80blc226oyejwj85f3cusfgowq5pps3172qe6ocvlewtuxpgzcadqp8zvhlqe78m7yj0p3ncaut6wz1ybojz8g3ji51thixyf30u0xku95u7j3dnpxfppu5 == \9\h\7\i\8\1\1\q\b\z\7\z\2\5\5\2\n\c\w\0\0\r\z\v\y\e\4\5\x\y\c\0\x\0\c\o\r\o\z\o\s\6\u\g\y\y\6\0\d\s\o\p\3\2\a\1\p\g\b\u\6\y\a\r\z\u\k\4\k\d\7\f\4\3\r\3\x\5\q\o\i\g\c\2\a\v\w\p\y\x\f\t\i\o\5\b\g\u\a\m\5\5\x\w\k\b\e\1\i\x\e\m\z\c\l\d\0\j\h\e\s\9\k\r\8\r\k\c\v\i\l\7\i\d\9\b\h\z\v\y\q\w\9\3\v\9\a\t\d\y\i\4\5\5\v\1\7\2\e\p\o\a\z\c\c\x\b\s\w\5\w\e\j\b\d\z\o\a\0\p\3\0\q\s\q\m\f\j\1\7\y\2\e\v\f\j\i\h\9\x\7\d\e\k\s\z\l\6\b\5\d\5\e\v\u\7\1\n\u\e\a\d\l\r\z\j\j\w\i\s\h\3\f\n\x\5\v\w\9\2\l\3\3\h\r\c\5\l\t\g\2\o\c\o\l\7\z\v\c\j\c\k\j\1\b\d\v\p\9\5\6\9\0\m\f\m\z\k\j\u\v\d\4\e\3\n\o\0\n\a\f\t\2\t\l\y\e\a\t\h\b\4\6\7\8\q\j\s\t\6\g\e\6\v\c\n\x\l\g\w\r\r\t\y\q\7\q\o\y\4\f\3\r\j\4\0\m\g\e\z\3\p\m\4\m\1\b\k\8\w\u\3\1\c\a\v\e\x\3\n\x\h\n\q\x\9\u\y\3\j\c\p\c\d\t\q\f\d\r\n\8\6\n\6\2\a\o\x\3\y\j\v\i\8\0\b\l\c\2\2\6\o\y\e\j\w\j\8\5\f\3\c\u\s\f\g\o\w\q\5\p\p\s\3\1\7\2\q\e\6\o\c\v\l\e\w\t\u\x\p\g\z\c\a\d\q\p\8\z\v\h\l\q\e\7\8\m\7\y\j\0\p\3\n\c\a\u\t\6\w\z\1\y\b\o\j\z\8\g\3\j\i\5\1\t\h\i\x\y\f\3\0\u\0\x\k\u\9\5\u\7\j\3\d\n\p\x\f\p\p\u\5 ]] 00:29:15.212 00:48:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:15.212 00:48:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:29:15.212 [2024-04-27 00:48:48.500652] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
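dd_flags_misc walks a small matrix: for each read flag in flags_ro=(direct nonblock) it generates a fresh 512-byte payload, then copies it once per write flag in flags_rw=(direct nonblock sync dsync) and asserts the destination matches after every combination; the rounds above and below are that loop unrolled. Its shape in sketch form (the bare spdk_dd name and the gen_bytes redirection are stand-ins for the traced full-path invocations):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512 > dd.dump0                 # fresh payload per read flag
      for flag_rw in "${flags_rw[@]}"; do
          spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
          [[ "$(cat dd.dump1)" == "$(cat dd.dump0)" ]]
      done
  done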
00:29:15.212 [2024-04-27 00:48:48.500841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142533 ] 00:29:15.212 [2024-04-27 00:48:48.669888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.471 [2024-04-27 00:48:48.904675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.107  Copying: 512/512 [B] (average 500 kBps) 00:29:17.107 00:29:17.107 00:48:50 -- dd/posix.sh@93 -- # [[ 9h7i811qbz7z2552ncw00rzvye45xyc0x0corozos6ugyy60dsop32a1pgbu6yarzuk4kd7f43r3x5qoigc2avwpyxftio5bguam55xwkbe1ixemzcld0jhes9kr8rkcvil7id9bhzvyqw93v9atdyi455v172epoazccxbsw5wejbdzoa0p30qsqmfj17y2evfjih9x7dekszl6b5d5evu71nueadlrzjjwish3fnx5vw92l33hrc5ltg2ocol7zvcjckj1bdvp95690mfmzkjuvd4e3no0naft2tlyeathb4678qjst6ge6vcnxlgwrrtyq7qoy4f3rj40mgez3pm4m1bk8wu31cavex3nxhnqx9uy3jcpcdtqfdrn86n62aox3yjvi80blc226oyejwj85f3cusfgowq5pps3172qe6ocvlewtuxpgzcadqp8zvhlqe78m7yj0p3ncaut6wz1ybojz8g3ji51thixyf30u0xku95u7j3dnpxfppu5 == \9\h\7\i\8\1\1\q\b\z\7\z\2\5\5\2\n\c\w\0\0\r\z\v\y\e\4\5\x\y\c\0\x\0\c\o\r\o\z\o\s\6\u\g\y\y\6\0\d\s\o\p\3\2\a\1\p\g\b\u\6\y\a\r\z\u\k\4\k\d\7\f\4\3\r\3\x\5\q\o\i\g\c\2\a\v\w\p\y\x\f\t\i\o\5\b\g\u\a\m\5\5\x\w\k\b\e\1\i\x\e\m\z\c\l\d\0\j\h\e\s\9\k\r\8\r\k\c\v\i\l\7\i\d\9\b\h\z\v\y\q\w\9\3\v\9\a\t\d\y\i\4\5\5\v\1\7\2\e\p\o\a\z\c\c\x\b\s\w\5\w\e\j\b\d\z\o\a\0\p\3\0\q\s\q\m\f\j\1\7\y\2\e\v\f\j\i\h\9\x\7\d\e\k\s\z\l\6\b\5\d\5\e\v\u\7\1\n\u\e\a\d\l\r\z\j\j\w\i\s\h\3\f\n\x\5\v\w\9\2\l\3\3\h\r\c\5\l\t\g\2\o\c\o\l\7\z\v\c\j\c\k\j\1\b\d\v\p\9\5\6\9\0\m\f\m\z\k\j\u\v\d\4\e\3\n\o\0\n\a\f\t\2\t\l\y\e\a\t\h\b\4\6\7\8\q\j\s\t\6\g\e\6\v\c\n\x\l\g\w\r\r\t\y\q\7\q\o\y\4\f\3\r\j\4\0\m\g\e\z\3\p\m\4\m\1\b\k\8\w\u\3\1\c\a\v\e\x\3\n\x\h\n\q\x\9\u\y\3\j\c\p\c\d\t\q\f\d\r\n\8\6\n\6\2\a\o\x\3\y\j\v\i\8\0\b\l\c\2\2\6\o\y\e\j\w\j\8\5\f\3\c\u\s\f\g\o\w\q\5\p\p\s\3\1\7\2\q\e\6\o\c\v\l\e\w\t\u\x\p\g\z\c\a\d\q\p\8\z\v\h\l\q\e\7\8\m\7\y\j\0\p\3\n\c\a\u\t\6\w\z\1\y\b\o\j\z\8\g\3\j\i\5\1\t\h\i\x\y\f\3\0\u\0\x\k\u\9\5\u\7\j\3\d\n\p\x\f\p\p\u\5 ]] 00:29:17.107 00:48:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:17.107 00:48:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:29:17.107 [2024-04-27 00:48:50.396128] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
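For reference, the four flag names in the matrix map directly onto open(2) flags; this summary is background knowledge, not something the log itself states:

  # direct   -> O_DIRECT   : bypass the page cache (buffers must be aligned)
  # nonblock -> O_NONBLOCK : open without blocking; effectively a no-op on regular files
  # sync     -> O_SYNC     : each write returns only after data and metadata reach stable storage
  # dsync    -> O_DSYNC    : like sync, but metadata is flushed only when needed to read the data back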
00:29:17.107 [2024-04-27 00:48:50.396341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142558 ] 00:29:17.107 [2024-04-27 00:48:50.564471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.366 [2024-04-27 00:48:50.756599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.562  Copying: 512/512 [B] (average 125 kBps) 00:29:18.562 00:29:18.562 00:48:52 -- dd/posix.sh@93 -- # [[ 9h7i811qbz7z2552ncw00rzvye45xyc0x0corozos6ugyy60dsop32a1pgbu6yarzuk4kd7f43r3x5qoigc2avwpyxftio5bguam55xwkbe1ixemzcld0jhes9kr8rkcvil7id9bhzvyqw93v9atdyi455v172epoazccxbsw5wejbdzoa0p30qsqmfj17y2evfjih9x7dekszl6b5d5evu71nueadlrzjjwish3fnx5vw92l33hrc5ltg2ocol7zvcjckj1bdvp95690mfmzkjuvd4e3no0naft2tlyeathb4678qjst6ge6vcnxlgwrrtyq7qoy4f3rj40mgez3pm4m1bk8wu31cavex3nxhnqx9uy3jcpcdtqfdrn86n62aox3yjvi80blc226oyejwj85f3cusfgowq5pps3172qe6ocvlewtuxpgzcadqp8zvhlqe78m7yj0p3ncaut6wz1ybojz8g3ji51thixyf30u0xku95u7j3dnpxfppu5 == \9\h\7\i\8\1\1\q\b\z\7\z\2\5\5\2\n\c\w\0\0\r\z\v\y\e\4\5\x\y\c\0\x\0\c\o\r\o\z\o\s\6\u\g\y\y\6\0\d\s\o\p\3\2\a\1\p\g\b\u\6\y\a\r\z\u\k\4\k\d\7\f\4\3\r\3\x\5\q\o\i\g\c\2\a\v\w\p\y\x\f\t\i\o\5\b\g\u\a\m\5\5\x\w\k\b\e\1\i\x\e\m\z\c\l\d\0\j\h\e\s\9\k\r\8\r\k\c\v\i\l\7\i\d\9\b\h\z\v\y\q\w\9\3\v\9\a\t\d\y\i\4\5\5\v\1\7\2\e\p\o\a\z\c\c\x\b\s\w\5\w\e\j\b\d\z\o\a\0\p\3\0\q\s\q\m\f\j\1\7\y\2\e\v\f\j\i\h\9\x\7\d\e\k\s\z\l\6\b\5\d\5\e\v\u\7\1\n\u\e\a\d\l\r\z\j\j\w\i\s\h\3\f\n\x\5\v\w\9\2\l\3\3\h\r\c\5\l\t\g\2\o\c\o\l\7\z\v\c\j\c\k\j\1\b\d\v\p\9\5\6\9\0\m\f\m\z\k\j\u\v\d\4\e\3\n\o\0\n\a\f\t\2\t\l\y\e\a\t\h\b\4\6\7\8\q\j\s\t\6\g\e\6\v\c\n\x\l\g\w\r\r\t\y\q\7\q\o\y\4\f\3\r\j\4\0\m\g\e\z\3\p\m\4\m\1\b\k\8\w\u\3\1\c\a\v\e\x\3\n\x\h\n\q\x\9\u\y\3\j\c\p\c\d\t\q\f\d\r\n\8\6\n\6\2\a\o\x\3\y\j\v\i\8\0\b\l\c\2\2\6\o\y\e\j\w\j\8\5\f\3\c\u\s\f\g\o\w\q\5\p\p\s\3\1\7\2\q\e\6\o\c\v\l\e\w\t\u\x\p\g\z\c\a\d\q\p\8\z\v\h\l\q\e\7\8\m\7\y\j\0\p\3\n\c\a\u\t\6\w\z\1\y\b\o\j\z\8\g\3\j\i\5\1\t\h\i\x\y\f\3\0\u\0\x\k\u\9\5\u\7\j\3\d\n\p\x\f\p\p\u\5 ]] 00:29:18.562 00:48:52 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:18.562 00:48:52 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:29:18.562 [2024-04-27 00:48:52.122988] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
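The long backslash-riddled right-hand sides in the [[ ... == ... ]] assertions above are not corruption: inside [[ ]] the right operand is a glob pattern, so the test quotes it to force a literal match, and bash's xtrace escapes every character of the quoted pattern when echoing the expanded command. A two-line reproduction:

  set -x
  s=9h7i
  [[ $s == "$s" ]]   # traced as: [[ 9h7i == \9\h\7\i ]]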
00:29:18.562 [2024-04-27 00:48:52.123227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142586 ] 00:29:18.821 [2024-04-27 00:48:52.295113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.080 [2024-04-27 00:48:52.503226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.271  Copying: 512/512 [B] (average 125 kBps) 00:29:20.272 00:29:20.272 00:48:53 -- dd/posix.sh@93 -- # [[ 9h7i811qbz7z2552ncw00rzvye45xyc0x0corozos6ugyy60dsop32a1pgbu6yarzuk4kd7f43r3x5qoigc2avwpyxftio5bguam55xwkbe1ixemzcld0jhes9kr8rkcvil7id9bhzvyqw93v9atdyi455v172epoazccxbsw5wejbdzoa0p30qsqmfj17y2evfjih9x7dekszl6b5d5evu71nueadlrzjjwish3fnx5vw92l33hrc5ltg2ocol7zvcjckj1bdvp95690mfmzkjuvd4e3no0naft2tlyeathb4678qjst6ge6vcnxlgwrrtyq7qoy4f3rj40mgez3pm4m1bk8wu31cavex3nxhnqx9uy3jcpcdtqfdrn86n62aox3yjvi80blc226oyejwj85f3cusfgowq5pps3172qe6ocvlewtuxpgzcadqp8zvhlqe78m7yj0p3ncaut6wz1ybojz8g3ji51thixyf30u0xku95u7j3dnpxfppu5 == \9\h\7\i\8\1\1\q\b\z\7\z\2\5\5\2\n\c\w\0\0\r\z\v\y\e\4\5\x\y\c\0\x\0\c\o\r\o\z\o\s\6\u\g\y\y\6\0\d\s\o\p\3\2\a\1\p\g\b\u\6\y\a\r\z\u\k\4\k\d\7\f\4\3\r\3\x\5\q\o\i\g\c\2\a\v\w\p\y\x\f\t\i\o\5\b\g\u\a\m\5\5\x\w\k\b\e\1\i\x\e\m\z\c\l\d\0\j\h\e\s\9\k\r\8\r\k\c\v\i\l\7\i\d\9\b\h\z\v\y\q\w\9\3\v\9\a\t\d\y\i\4\5\5\v\1\7\2\e\p\o\a\z\c\c\x\b\s\w\5\w\e\j\b\d\z\o\a\0\p\3\0\q\s\q\m\f\j\1\7\y\2\e\v\f\j\i\h\9\x\7\d\e\k\s\z\l\6\b\5\d\5\e\v\u\7\1\n\u\e\a\d\l\r\z\j\j\w\i\s\h\3\f\n\x\5\v\w\9\2\l\3\3\h\r\c\5\l\t\g\2\o\c\o\l\7\z\v\c\j\c\k\j\1\b\d\v\p\9\5\6\9\0\m\f\m\z\k\j\u\v\d\4\e\3\n\o\0\n\a\f\t\2\t\l\y\e\a\t\h\b\4\6\7\8\q\j\s\t\6\g\e\6\v\c\n\x\l\g\w\r\r\t\y\q\7\q\o\y\4\f\3\r\j\4\0\m\g\e\z\3\p\m\4\m\1\b\k\8\w\u\3\1\c\a\v\e\x\3\n\x\h\n\q\x\9\u\y\3\j\c\p\c\d\t\q\f\d\r\n\8\6\n\6\2\a\o\x\3\y\j\v\i\8\0\b\l\c\2\2\6\o\y\e\j\w\j\8\5\f\3\c\u\s\f\g\o\w\q\5\p\p\s\3\1\7\2\q\e\6\o\c\v\l\e\w\t\u\x\p\g\z\c\a\d\q\p\8\z\v\h\l\q\e\7\8\m\7\y\j\0\p\3\n\c\a\u\t\6\w\z\1\y\b\o\j\z\8\g\3\j\i\5\1\t\h\i\x\y\f\3\0\u\0\x\k\u\9\5\u\7\j\3\d\n\p\x\f\p\p\u\5 ]] 00:29:20.272 00:48:53 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:29:20.272 00:48:53 -- dd/posix.sh@86 -- # gen_bytes 512 00:29:20.272 00:48:53 -- dd/common.sh@98 -- # xtrace_disable 00:29:20.272 00:48:53 -- common/autotest_common.sh@10 -- # set +x 00:29:20.272 00:48:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:20.272 00:48:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:29:20.530 [2024-04-27 00:48:53.910390] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
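gen_bytes runs before every round but hides its body behind xtrace_disable, so only the call is traced. A stand-in with the same observable contract (lowercase alphanumeric payloads, like the 32- and 512-byte tokens above) might look like this; the real helper lives in dd/common.sh and may differ:

  gen_bytes() {
      local n=$1
      tr -dc 'a-z0-9' < /dev/urandom | head -c "$n"   # assumed charset and source
  }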
00:29:20.530 [2024-04-27 00:48:53.910574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142608 ] 00:29:20.530 [2024-04-27 00:48:54.078221] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.788 [2024-04-27 00:48:54.365575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.293  Copying: 512/512 [B] (average 500 kBps) 00:29:22.293 00:29:22.293 00:48:55 -- dd/posix.sh@93 -- # [[ dkia7jexguqfxrxgmun5zu493hn89vg5llfr7m3qqu2ul3vjs1074e38qykrem2zj6ey15sd71vl369z5irq1cq3uok99wd0of3kzv1yy13nw2vm9zkzti0988tbl9bxrw4ivxw2wr8yhzu96io3t9zndb8nzsoo0cuvaf1rz2u7idl3z1jm4jmedsf9e77n58tx1pje4ql9jjr1xa02kzkl5drkwp60qin0730qz1rjs6d98xg7bpbjtwxpdhzzhssy1b73ipbksog0xtixbwx622a4sbsvjvi5k10k8yhn4hwx6eltg9a494g1gzvhqv1il71o2zsrwe2edsw8mpojo8xavf8ws8g27ktz85psl3pwz2oaem91felp35oi6t44f2m5e01uj9eya7gznztdj6ajde4nd22bmbqgk00gkacas5kpiwiv94xvcmo763hsy6z77pup2mzgz5y2w2yfl1r6p05v3w37adu46od9mfjh5v52xfh2xf4d0w4k == \d\k\i\a\7\j\e\x\g\u\q\f\x\r\x\g\m\u\n\5\z\u\4\9\3\h\n\8\9\v\g\5\l\l\f\r\7\m\3\q\q\u\2\u\l\3\v\j\s\1\0\7\4\e\3\8\q\y\k\r\e\m\2\z\j\6\e\y\1\5\s\d\7\1\v\l\3\6\9\z\5\i\r\q\1\c\q\3\u\o\k\9\9\w\d\0\o\f\3\k\z\v\1\y\y\1\3\n\w\2\v\m\9\z\k\z\t\i\0\9\8\8\t\b\l\9\b\x\r\w\4\i\v\x\w\2\w\r\8\y\h\z\u\9\6\i\o\3\t\9\z\n\d\b\8\n\z\s\o\o\0\c\u\v\a\f\1\r\z\2\u\7\i\d\l\3\z\1\j\m\4\j\m\e\d\s\f\9\e\7\7\n\5\8\t\x\1\p\j\e\4\q\l\9\j\j\r\1\x\a\0\2\k\z\k\l\5\d\r\k\w\p\6\0\q\i\n\0\7\3\0\q\z\1\r\j\s\6\d\9\8\x\g\7\b\p\b\j\t\w\x\p\d\h\z\z\h\s\s\y\1\b\7\3\i\p\b\k\s\o\g\0\x\t\i\x\b\w\x\6\2\2\a\4\s\b\s\v\j\v\i\5\k\1\0\k\8\y\h\n\4\h\w\x\6\e\l\t\g\9\a\4\9\4\g\1\g\z\v\h\q\v\1\i\l\7\1\o\2\z\s\r\w\e\2\e\d\s\w\8\m\p\o\j\o\8\x\a\v\f\8\w\s\8\g\2\7\k\t\z\8\5\p\s\l\3\p\w\z\2\o\a\e\m\9\1\f\e\l\p\3\5\o\i\6\t\4\4\f\2\m\5\e\0\1\u\j\9\e\y\a\7\g\z\n\z\t\d\j\6\a\j\d\e\4\n\d\2\2\b\m\b\q\g\k\0\0\g\k\a\c\a\s\5\k\p\i\w\i\v\9\4\x\v\c\m\o\7\6\3\h\s\y\6\z\7\7\p\u\p\2\m\z\g\z\5\y\2\w\2\y\f\l\1\r\6\p\0\5\v\3\w\3\7\a\d\u\4\6\o\d\9\m\f\j\h\5\v\5\2\x\f\h\2\x\f\4\d\0\w\4\k ]] 00:29:22.293 00:48:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:22.293 00:48:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:29:22.552 [2024-04-27 00:48:55.885808] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:22.552 [2024-04-27 00:48:55.886015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142632 ] 00:29:22.552 [2024-04-27 00:48:56.049493] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.811 [2024-04-27 00:48:56.311434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.448  Copying: 512/512 [B] (average 500 kBps) 00:29:24.448 00:29:24.448 00:48:57 -- dd/posix.sh@93 -- # [[ dkia7jexguqfxrxgmun5zu493hn89vg5llfr7m3qqu2ul3vjs1074e38qykrem2zj6ey15sd71vl369z5irq1cq3uok99wd0of3kzv1yy13nw2vm9zkzti0988tbl9bxrw4ivxw2wr8yhzu96io3t9zndb8nzsoo0cuvaf1rz2u7idl3z1jm4jmedsf9e77n58tx1pje4ql9jjr1xa02kzkl5drkwp60qin0730qz1rjs6d98xg7bpbjtwxpdhzzhssy1b73ipbksog0xtixbwx622a4sbsvjvi5k10k8yhn4hwx6eltg9a494g1gzvhqv1il71o2zsrwe2edsw8mpojo8xavf8ws8g27ktz85psl3pwz2oaem91felp35oi6t44f2m5e01uj9eya7gznztdj6ajde4nd22bmbqgk00gkacas5kpiwiv94xvcmo763hsy6z77pup2mzgz5y2w2yfl1r6p05v3w37adu46od9mfjh5v52xfh2xf4d0w4k == \d\k\i\a\7\j\e\x\g\u\q\f\x\r\x\g\m\u\n\5\z\u\4\9\3\h\n\8\9\v\g\5\l\l\f\r\7\m\3\q\q\u\2\u\l\3\v\j\s\1\0\7\4\e\3\8\q\y\k\r\e\m\2\z\j\6\e\y\1\5\s\d\7\1\v\l\3\6\9\z\5\i\r\q\1\c\q\3\u\o\k\9\9\w\d\0\o\f\3\k\z\v\1\y\y\1\3\n\w\2\v\m\9\z\k\z\t\i\0\9\8\8\t\b\l\9\b\x\r\w\4\i\v\x\w\2\w\r\8\y\h\z\u\9\6\i\o\3\t\9\z\n\d\b\8\n\z\s\o\o\0\c\u\v\a\f\1\r\z\2\u\7\i\d\l\3\z\1\j\m\4\j\m\e\d\s\f\9\e\7\7\n\5\8\t\x\1\p\j\e\4\q\l\9\j\j\r\1\x\a\0\2\k\z\k\l\5\d\r\k\w\p\6\0\q\i\n\0\7\3\0\q\z\1\r\j\s\6\d\9\8\x\g\7\b\p\b\j\t\w\x\p\d\h\z\z\h\s\s\y\1\b\7\3\i\p\b\k\s\o\g\0\x\t\i\x\b\w\x\6\2\2\a\4\s\b\s\v\j\v\i\5\k\1\0\k\8\y\h\n\4\h\w\x\6\e\l\t\g\9\a\4\9\4\g\1\g\z\v\h\q\v\1\i\l\7\1\o\2\z\s\r\w\e\2\e\d\s\w\8\m\p\o\j\o\8\x\a\v\f\8\w\s\8\g\2\7\k\t\z\8\5\p\s\l\3\p\w\z\2\o\a\e\m\9\1\f\e\l\p\3\5\o\i\6\t\4\4\f\2\m\5\e\0\1\u\j\9\e\y\a\7\g\z\n\z\t\d\j\6\a\j\d\e\4\n\d\2\2\b\m\b\q\g\k\0\0\g\k\a\c\a\s\5\k\p\i\w\i\v\9\4\x\v\c\m\o\7\6\3\h\s\y\6\z\7\7\p\u\p\2\m\z\g\z\5\y\2\w\2\y\f\l\1\r\6\p\0\5\v\3\w\3\7\a\d\u\4\6\o\d\9\m\f\j\h\5\v\5\2\x\f\h\2\x\f\4\d\0\w\4\k ]] 00:29:24.448 00:48:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:24.449 00:48:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:29:24.449 [2024-04-27 00:48:57.841452] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:24.449 [2024-04-27 00:48:57.841644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142661 ] 00:29:24.449 [2024-04-27 00:48:58.010654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.707 [2024-04-27 00:48:58.221405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.343  Copying: 512/512 [B] (average 250 kBps) 00:29:26.343 00:29:26.343 00:48:59 -- dd/posix.sh@93 -- # [[ dkia7jexguqfxrxgmun5zu493hn89vg5llfr7m3qqu2ul3vjs1074e38qykrem2zj6ey15sd71vl369z5irq1cq3uok99wd0of3kzv1yy13nw2vm9zkzti0988tbl9bxrw4ivxw2wr8yhzu96io3t9zndb8nzsoo0cuvaf1rz2u7idl3z1jm4jmedsf9e77n58tx1pje4ql9jjr1xa02kzkl5drkwp60qin0730qz1rjs6d98xg7bpbjtwxpdhzzhssy1b73ipbksog0xtixbwx622a4sbsvjvi5k10k8yhn4hwx6eltg9a494g1gzvhqv1il71o2zsrwe2edsw8mpojo8xavf8ws8g27ktz85psl3pwz2oaem91felp35oi6t44f2m5e01uj9eya7gznztdj6ajde4nd22bmbqgk00gkacas5kpiwiv94xvcmo763hsy6z77pup2mzgz5y2w2yfl1r6p05v3w37adu46od9mfjh5v52xfh2xf4d0w4k == \d\k\i\a\7\j\e\x\g\u\q\f\x\r\x\g\m\u\n\5\z\u\4\9\3\h\n\8\9\v\g\5\l\l\f\r\7\m\3\q\q\u\2\u\l\3\v\j\s\1\0\7\4\e\3\8\q\y\k\r\e\m\2\z\j\6\e\y\1\5\s\d\7\1\v\l\3\6\9\z\5\i\r\q\1\c\q\3\u\o\k\9\9\w\d\0\o\f\3\k\z\v\1\y\y\1\3\n\w\2\v\m\9\z\k\z\t\i\0\9\8\8\t\b\l\9\b\x\r\w\4\i\v\x\w\2\w\r\8\y\h\z\u\9\6\i\o\3\t\9\z\n\d\b\8\n\z\s\o\o\0\c\u\v\a\f\1\r\z\2\u\7\i\d\l\3\z\1\j\m\4\j\m\e\d\s\f\9\e\7\7\n\5\8\t\x\1\p\j\e\4\q\l\9\j\j\r\1\x\a\0\2\k\z\k\l\5\d\r\k\w\p\6\0\q\i\n\0\7\3\0\q\z\1\r\j\s\6\d\9\8\x\g\7\b\p\b\j\t\w\x\p\d\h\z\z\h\s\s\y\1\b\7\3\i\p\b\k\s\o\g\0\x\t\i\x\b\w\x\6\2\2\a\4\s\b\s\v\j\v\i\5\k\1\0\k\8\y\h\n\4\h\w\x\6\e\l\t\g\9\a\4\9\4\g\1\g\z\v\h\q\v\1\i\l\7\1\o\2\z\s\r\w\e\2\e\d\s\w\8\m\p\o\j\o\8\x\a\v\f\8\w\s\8\g\2\7\k\t\z\8\5\p\s\l\3\p\w\z\2\o\a\e\m\9\1\f\e\l\p\3\5\o\i\6\t\4\4\f\2\m\5\e\0\1\u\j\9\e\y\a\7\g\z\n\z\t\d\j\6\a\j\d\e\4\n\d\2\2\b\m\b\q\g\k\0\0\g\k\a\c\a\s\5\k\p\i\w\i\v\9\4\x\v\c\m\o\7\6\3\h\s\y\6\z\7\7\p\u\p\2\m\z\g\z\5\y\2\w\2\y\f\l\1\r\6\p\0\5\v\3\w\3\7\a\d\u\4\6\o\d\9\m\f\j\h\5\v\5\2\x\f\h\2\x\f\4\d\0\w\4\k ]] 00:29:26.343 00:48:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:26.343 00:48:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:29:26.343 [2024-04-27 00:48:59.853510] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:26.343 [2024-04-27 00:48:59.853737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142685 ] 00:29:26.601 [2024-04-27 00:49:00.026987] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.859 [2024-04-27 00:49:00.302188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.492  Copying: 512/512 [B] (average 166 kBps) 00:29:28.492 00:29:28.492 ************************************ 00:29:28.492 END TEST dd_flags_misc 00:29:28.492 ************************************ 00:29:28.492 00:49:01 -- dd/posix.sh@93 -- # [[ dkia7jexguqfxrxgmun5zu493hn89vg5llfr7m3qqu2ul3vjs1074e38qykrem2zj6ey15sd71vl369z5irq1cq3uok99wd0of3kzv1yy13nw2vm9zkzti0988tbl9bxrw4ivxw2wr8yhzu96io3t9zndb8nzsoo0cuvaf1rz2u7idl3z1jm4jmedsf9e77n58tx1pje4ql9jjr1xa02kzkl5drkwp60qin0730qz1rjs6d98xg7bpbjtwxpdhzzhssy1b73ipbksog0xtixbwx622a4sbsvjvi5k10k8yhn4hwx6eltg9a494g1gzvhqv1il71o2zsrwe2edsw8mpojo8xavf8ws8g27ktz85psl3pwz2oaem91felp35oi6t44f2m5e01uj9eya7gznztdj6ajde4nd22bmbqgk00gkacas5kpiwiv94xvcmo763hsy6z77pup2mzgz5y2w2yfl1r6p05v3w37adu46od9mfjh5v52xfh2xf4d0w4k == \d\k\i\a\7\j\e\x\g\u\q\f\x\r\x\g\m\u\n\5\z\u\4\9\3\h\n\8\9\v\g\5\l\l\f\r\7\m\3\q\q\u\2\u\l\3\v\j\s\1\0\7\4\e\3\8\q\y\k\r\e\m\2\z\j\6\e\y\1\5\s\d\7\1\v\l\3\6\9\z\5\i\r\q\1\c\q\3\u\o\k\9\9\w\d\0\o\f\3\k\z\v\1\y\y\1\3\n\w\2\v\m\9\z\k\z\t\i\0\9\8\8\t\b\l\9\b\x\r\w\4\i\v\x\w\2\w\r\8\y\h\z\u\9\6\i\o\3\t\9\z\n\d\b\8\n\z\s\o\o\0\c\u\v\a\f\1\r\z\2\u\7\i\d\l\3\z\1\j\m\4\j\m\e\d\s\f\9\e\7\7\n\5\8\t\x\1\p\j\e\4\q\l\9\j\j\r\1\x\a\0\2\k\z\k\l\5\d\r\k\w\p\6\0\q\i\n\0\7\3\0\q\z\1\r\j\s\6\d\9\8\x\g\7\b\p\b\j\t\w\x\p\d\h\z\z\h\s\s\y\1\b\7\3\i\p\b\k\s\o\g\0\x\t\i\x\b\w\x\6\2\2\a\4\s\b\s\v\j\v\i\5\k\1\0\k\8\y\h\n\4\h\w\x\6\e\l\t\g\9\a\4\9\4\g\1\g\z\v\h\q\v\1\i\l\7\1\o\2\z\s\r\w\e\2\e\d\s\w\8\m\p\o\j\o\8\x\a\v\f\8\w\s\8\g\2\7\k\t\z\8\5\p\s\l\3\p\w\z\2\o\a\e\m\9\1\f\e\l\p\3\5\o\i\6\t\4\4\f\2\m\5\e\0\1\u\j\9\e\y\a\7\g\z\n\z\t\d\j\6\a\j\d\e\4\n\d\2\2\b\m\b\q\g\k\0\0\g\k\a\c\a\s\5\k\p\i\w\i\v\9\4\x\v\c\m\o\7\6\3\h\s\y\6\z\7\7\p\u\p\2\m\z\g\z\5\y\2\w\2\y\f\l\1\r\6\p\0\5\v\3\w\3\7\a\d\u\4\6\o\d\9\m\f\j\h\5\v\5\2\x\f\h\2\x\f\4\d\0\w\4\k ]] 00:29:28.492 00:29:28.492 real 0m15.408s 00:29:28.492 user 0m12.383s 00:29:28.492 sys 0m1.939s 00:29:28.492 00:49:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:28.492 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.492 00:49:01 -- dd/posix.sh@131 -- # tests_forced_aio 00:29:28.492 00:49:01 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:29:28.492 * Second test run, using AIO 00:29:28.492 00:49:01 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:29:28.492 00:49:01 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:29:28.492 00:49:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:28.492 00:49:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:28.492 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.492 ************************************ 00:29:28.492 START TEST dd_flag_append_forced_aio 00:29:28.492 ************************************ 00:29:28.492 00:49:01 -- common/autotest_common.sh@1111 -- # append 00:29:28.492 00:49:01 -- dd/posix.sh@16 -- # local dump0 00:29:28.492 00:49:01 -- dd/posix.sh@17 -- # local dump1 00:29:28.492 00:49:01 -- dd/posix.sh@19 -- # gen_bytes 32 00:29:28.492 00:49:01 -- dd/common.sh@98 -- # xtrace_disable 
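From "* Second test run, using AIO" above, the whole posix suite repeats with spdk_dd driven through its asynchronous I/O path. The mechanism is the DD_APP array at dd/posix.sh@113: the binary and its sticky flags live in one array, so appending --aio retrofits every later invocation, which is why the forced_aio tests all trace as spdk_dd --aio. The pattern in isolation:

  DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
  DD_APP+=("--aio")                              # as at dd/posix.sh@113
  "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1     # expands to: spdk_dd --aio ...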
00:29:28.492 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.492 00:49:01 -- dd/posix.sh@19 -- # dump0=i97q7f9koihub3jvwtrc6k8ndd0d8sv1 00:29:28.492 00:49:01 -- dd/posix.sh@20 -- # gen_bytes 32 00:29:28.492 00:49:01 -- dd/common.sh@98 -- # xtrace_disable 00:29:28.492 00:49:01 -- common/autotest_common.sh@10 -- # set +x 00:29:28.492 00:49:01 -- dd/posix.sh@20 -- # dump1=m2z4ltzq45mvf6071uoadmn2f09vhwq5 00:29:28.492 00:49:01 -- dd/posix.sh@22 -- # printf %s i97q7f9koihub3jvwtrc6k8ndd0d8sv1 00:29:28.492 00:49:01 -- dd/posix.sh@23 -- # printf %s m2z4ltzq45mvf6071uoadmn2f09vhwq5 00:29:28.492 00:49:01 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:29:28.492 [2024-04-27 00:49:02.050542] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:28.492 [2024-04-27 00:49:02.050761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142741 ] 00:29:28.750 [2024-04-27 00:49:02.222870] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.008 [2024-04-27 00:49:02.463092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.660  Copying: 32/32 [B] (average 31 kBps) 00:29:30.660 00:29:30.660 00:49:03 -- dd/posix.sh@27 -- # [[ m2z4ltzq45mvf6071uoadmn2f09vhwq5i97q7f9koihub3jvwtrc6k8ndd0d8sv1 == \m\2\z\4\l\t\z\q\4\5\m\v\f\6\0\7\1\u\o\a\d\m\n\2\f\0\9\v\h\w\q\5\i\9\7\q\7\f\9\k\o\i\h\u\b\3\j\v\w\t\r\c\6\k\8\n\d\d\0\d\8\s\v\1 ]] 00:29:30.660 00:29:30.660 real 0m1.996s 00:29:30.660 user 0m1.585s 00:29:30.660 sys 0m0.280s 00:29:30.660 ************************************ 00:29:30.660 END TEST dd_flag_append_forced_aio 00:29:30.660 ************************************ 00:29:30.660 00:49:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:30.660 00:49:03 -- common/autotest_common.sh@10 -- # set +x 00:29:30.660 00:49:04 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:29:30.660 00:49:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:30.660 00:49:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.660 00:49:04 -- common/autotest_common.sh@10 -- # set +x 00:29:30.660 ************************************ 00:29:30.660 START TEST dd_flag_directory_forced_aio 00:29:30.660 ************************************ 00:29:30.660 00:49:04 -- common/autotest_common.sh@1111 -- # directory 00:29:30.660 00:49:04 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:30.660 00:49:04 -- common/autotest_common.sh@638 -- # local es=0 00:29:30.660 00:49:04 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:30.660 00:49:04 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:30.660 00:49:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:30.660 00:49:04 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:30.660 00:49:04 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:30.660 00:49:04 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:30.660 00:49:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:30.660 00:49:04 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:30.660 00:49:04 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:30.660 00:49:04 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:30.660 [2024-04-27 00:49:04.136751] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:30.660 [2024-04-27 00:49:04.137231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142799 ] 00:29:30.918 [2024-04-27 00:49:04.310625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.177 [2024-04-27 00:49:04.553024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.435 [2024-04-27 00:49:04.881596] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:31.435 [2024-04-27 00:49:04.882017] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:31.435 [2024-04-27 00:49:04.882188] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:32.370 [2024-04-27 00:49:05.625628] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:29:32.629 00:49:06 -- common/autotest_common.sh@641 -- # es=236 00:29:32.629 00:49:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:32.629 00:49:06 -- common/autotest_common.sh@650 -- # es=108 00:29:32.629 00:49:06 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:32.629 00:49:06 -- common/autotest_common.sh@658 -- # es=1 00:29:32.629 00:49:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:32.629 00:49:06 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:32.629 00:49:06 -- common/autotest_common.sh@638 -- # local es=0 00:29:32.629 00:49:06 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:32.629 00:49:06 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:32.629 00:49:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:32.629 00:49:06 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:32.629 00:49:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:32.629 00:49:06 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:32.629 00:49:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:32.629 00:49:06 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:29:32.629 00:49:06 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:32.629 00:49:06 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:29:32.629 [2024-04-27 00:49:06.135017] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:32.629 [2024-04-27 00:49:06.135775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142826 ] 00:29:32.888 [2024-04-27 00:49:06.314474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.147 [2024-04-27 00:49:06.547722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.406 [2024-04-27 00:49:06.913421] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:33.406 [2024-04-27 00:49:06.913756] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:29:33.406 [2024-04-27 00:49:06.913845] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:34.355 [2024-04-27 00:49:07.651219] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:29:34.626 ************************************ 00:29:34.626 END TEST dd_flag_directory_forced_aio 00:29:34.626 ************************************ 00:29:34.626 00:49:08 -- common/autotest_common.sh@641 -- # es=236 00:29:34.626 00:49:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:34.626 00:49:08 -- common/autotest_common.sh@650 -- # es=108 00:29:34.626 00:49:08 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:34.626 00:49:08 -- common/autotest_common.sh@658 -- # es=1 00:29:34.626 00:49:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:34.626 00:29:34.626 real 0m3.998s 00:29:34.626 user 0m3.236s 00:29:34.626 sys 0m0.557s 00:29:34.626 00:49:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:34.626 00:49:08 -- common/autotest_common.sh@10 -- # set +x 00:29:34.626 00:49:08 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:29:34.626 00:49:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:34.626 00:49:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:34.626 00:49:08 -- common/autotest_common.sh@10 -- # set +x 00:29:34.626 ************************************ 00:29:34.626 START TEST dd_flag_nofollow_forced_aio 00:29:34.626 ************************************ 00:29:34.626 00:49:08 -- common/autotest_common.sh@1111 -- # nofollow 00:29:34.626 00:49:08 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:34.626 00:49:08 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:34.626 00:49:08 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:29:34.626 00:49:08 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:29:34.626 00:49:08 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:34.626 00:49:08 -- common/autotest_common.sh@638 -- # local es=0 00:29:34.626 00:49:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:34.626 00:49:08 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:34.626 00:49:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:34.626 00:49:08 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:34.626 00:49:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:34.626 00:49:08 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:34.626 00:49:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:34.626 00:49:08 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:34.626 00:49:08 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:34.626 00:49:08 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:34.885 [2024-04-27 00:49:08.236783] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:34.885 [2024-04-27 00:49:08.236987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142881 ] 00:29:34.885 [2024-04-27 00:49:08.408072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.145 [2024-04-27 00:49:08.669794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.404 [2024-04-27 00:49:08.984822] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:29:35.404 [2024-04-27 00:49:08.985136] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:29:35.404 [2024-04-27 00:49:08.985225] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:36.340 [2024-04-27 00:49:09.715574] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:29:36.600 00:49:10 -- common/autotest_common.sh@641 -- # es=216 00:29:36.600 00:49:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:36.600 00:49:10 -- common/autotest_common.sh@650 -- # es=88 00:29:36.600 00:49:10 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:36.600 00:49:10 -- common/autotest_common.sh@658 -- # es=1 00:29:36.600 00:49:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:36.600 00:49:10 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:36.600 00:49:10 -- common/autotest_common.sh@638 -- # local es=0 00:29:36.600 00:49:10 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:36.600 00:49:10 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:36.600 00:49:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.600 00:49:10 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:36.600 00:49:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.600 00:49:10 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:36.600 00:49:10 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:36.600 00:49:10 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:36.600 00:49:10 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:36.600 00:49:10 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:29:36.859 [2024-04-27 00:49:10.208705] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:36.859 [2024-04-27 00:49:10.208946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142908 ] 00:29:36.859 [2024-04-27 00:49:10.379585] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.117 [2024-04-27 00:49:10.596279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.377 [2024-04-27 00:49:10.910201] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:29:37.377 [2024-04-27 00:49:10.910304] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:29:37.377 [2024-04-27 00:49:10.910338] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:38.312 [2024-04-27 00:49:11.652596] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:29:38.570 00:49:12 -- common/autotest_common.sh@641 -- # es=216 00:29:38.570 00:49:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:38.570 00:49:12 -- common/autotest_common.sh@650 -- # es=88 00:29:38.570 00:49:12 -- common/autotest_common.sh@651 -- # case "$es" in 00:29:38.570 00:49:12 -- common/autotest_common.sh@658 -- # es=1 00:29:38.570 00:49:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:38.570 00:49:12 -- dd/posix.sh@46 -- # gen_bytes 512 00:29:38.570 00:49:12 -- dd/common.sh@98 -- # xtrace_disable 00:29:38.570 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.570 00:49:12 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:38.570 [2024-04-27 00:49:12.130340] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:38.570 [2024-04-27 00:49:12.130610] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142932 ] 00:29:38.829 [2024-04-27 00:49:12.300647] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.088 [2024-04-27 00:49:12.507763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.720  Copying: 512/512 [B] (average 500 kBps) 00:29:40.720 00:29:40.720 00:49:13 -- dd/posix.sh@49 -- # [[ 62hc55wbamkardqvxqlbe9az9a887ya2lmdzyp4lx06p89qbh8ru95qouaujeyps4t20ia4qxwyrgzrwcoze51lrmgavmmn7gu4z4ejar48ost1k2n735d5kry2qxprokmjblnrkfw3tmf2hs9zq3dvbu6mxwkrxmnvw1su6v6293s8lgi6can2vr69rhk5vcufo89b138xn92l8v1to1b7iw6oa024rwtamsur3gxikfhk823bxppym2ndl40ibcvslu5209urholfvugl5hv0eskj8h4cjdqaztlhvmb6bk1yz5gf7xctckfjt9jr7imiecp25lmjog8itl80a3nmvaev68nnycsqn07s0w2ucmhwl5j4a2jmt9bg9acaukqpxddytuhpxmg5nuw1y5t32jlggrbvme878p9oc2ibf8wl86e7ltnze4pme0iulb5icup2urulwa86rz2dwp1tzhgwocrrbbk6ldzpre1nq81rholsx1s0vxx7lnleh == \6\2\h\c\5\5\w\b\a\m\k\a\r\d\q\v\x\q\l\b\e\9\a\z\9\a\8\8\7\y\a\2\l\m\d\z\y\p\4\l\x\0\6\p\8\9\q\b\h\8\r\u\9\5\q\o\u\a\u\j\e\y\p\s\4\t\2\0\i\a\4\q\x\w\y\r\g\z\r\w\c\o\z\e\5\1\l\r\m\g\a\v\m\m\n\7\g\u\4\z\4\e\j\a\r\4\8\o\s\t\1\k\2\n\7\3\5\d\5\k\r\y\2\q\x\p\r\o\k\m\j\b\l\n\r\k\f\w\3\t\m\f\2\h\s\9\z\q\3\d\v\b\u\6\m\x\w\k\r\x\m\n\v\w\1\s\u\6\v\6\2\9\3\s\8\l\g\i\6\c\a\n\2\v\r\6\9\r\h\k\5\v\c\u\f\o\8\9\b\1\3\8\x\n\9\2\l\8\v\1\t\o\1\b\7\i\w\6\o\a\0\2\4\r\w\t\a\m\s\u\r\3\g\x\i\k\f\h\k\8\2\3\b\x\p\p\y\m\2\n\d\l\4\0\i\b\c\v\s\l\u\5\2\0\9\u\r\h\o\l\f\v\u\g\l\5\h\v\0\e\s\k\j\8\h\4\c\j\d\q\a\z\t\l\h\v\m\b\6\b\k\1\y\z\5\g\f\7\x\c\t\c\k\f\j\t\9\j\r\7\i\m\i\e\c\p\2\5\l\m\j\o\g\8\i\t\l\8\0\a\3\n\m\v\a\e\v\6\8\n\n\y\c\s\q\n\0\7\s\0\w\2\u\c\m\h\w\l\5\j\4\a\2\j\m\t\9\b\g\9\a\c\a\u\k\q\p\x\d\d\y\t\u\h\p\x\m\g\5\n\u\w\1\y\5\t\3\2\j\l\g\g\r\b\v\m\e\8\7\8\p\9\o\c\2\i\b\f\8\w\l\8\6\e\7\l\t\n\z\e\4\p\m\e\0\i\u\l\b\5\i\c\u\p\2\u\r\u\l\w\a\8\6\r\z\2\d\w\p\1\t\z\h\g\w\o\c\r\r\b\b\k\6\l\d\z\p\r\e\1\n\q\8\1\r\h\o\l\s\x\1\s\0\v\x\x\7\l\n\l\e\h ]] 00:29:40.720 00:29:40.720 real 0m5.791s 00:29:40.720 user 0m4.658s 00:29:40.720 sys 0m0.775s 00:29:40.720 00:49:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:40.720 00:49:13 -- common/autotest_common.sh@10 -- # set +x 00:29:40.720 ************************************ 00:29:40.720 END TEST dd_flag_nofollow_forced_aio 00:29:40.720 ************************************ 00:29:40.720 00:49:13 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:29:40.720 00:49:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:40.720 00:49:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:40.720 00:49:13 -- common/autotest_common.sh@10 -- # set +x 00:29:40.720 ************************************ 00:29:40.720 START TEST dd_flag_noatime_forced_aio 00:29:40.720 ************************************ 00:29:40.720 00:49:14 -- common/autotest_common.sh@1111 -- # noatime 00:29:40.720 00:49:14 -- dd/posix.sh@53 -- # local atime_if 00:29:40.720 00:49:14 -- dd/posix.sh@54 -- # local atime_of 00:29:40.720 00:49:14 -- dd/posix.sh@58 -- # gen_bytes 512 00:29:40.720 00:49:14 -- dd/common.sh@98 -- # xtrace_disable 00:29:40.720 00:49:14 -- common/autotest_common.sh@10 -- # set +x 00:29:40.720 00:49:14 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:40.720 00:49:14 -- dd/posix.sh@60 -- # atime_if=1714178952 
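For readers decoding the xtrace around this point: the noatime test records the source file's access time, copies with --iflag=noatime, and asserts the atime did not move. A minimal consolidated sketch in bash of that check (assuming GNU coreutils stat and the spdk_dd binary/paths logged here; the variable names are illustrative, not taken from dd/posix.sh):

  # Record the source file's atime in epoch seconds, as the stat call above does.
  atime_before=$(stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0)
  sleep 1   # ensure a later read could observably bump the atime
  # Copy with O_NOATIME on the input descriptor; the read must not update atime.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  atime_after=$(stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0)
  (( atime_before == atime_after )) || echo 'noatime: source atime changed unexpectedly'

Note that the complementary assertion (a plain copy without --iflag=noatime does bump atime, as the later "(( atime_if < … ))" check records) depends on the filesystem not being mounted with noatime/relatime.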
00:29:40.720 00:49:14 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:40.720 00:49:14 -- dd/posix.sh@61 -- # atime_of=1714178953 00:29:40.720 00:49:14 -- dd/posix.sh@66 -- # sleep 1 00:29:41.714 00:49:15 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:41.714 [2024-04-27 00:49:15.100079] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:29:41.714 [2024-04-27 00:49:15.100304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143004 ] 00:29:41.714 [2024-04-27 00:49:15.273297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.972 [2024-04-27 00:49:15.507002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.606  Copying: 512/512 [B] (average 500 kBps) 00:29:43.606 00:29:43.606 00:49:16 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:43.606 00:49:16 -- dd/posix.sh@69 -- # (( atime_if == 1714178952 )) 00:29:43.607 00:49:16 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:43.607 00:49:16 -- dd/posix.sh@70 -- # (( atime_of == 1714178953 )) 00:29:43.607 00:49:16 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:43.607 [2024-04-27 00:49:16.894756] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:43.607 [2024-04-27 00:49:16.894918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143031 ] 00:29:43.607 [2024-04-27 00:49:17.056796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.866 [2024-04-27 00:49:17.224084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.061  Copying: 512/512 [B] (average 500 kBps) 00:29:45.061 00:29:45.061 00:49:18 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:45.061 00:49:18 -- dd/posix.sh@73 -- # (( atime_if < 1714178957 )) 00:29:45.061 00:29:45.061 real 0m4.613s 00:29:45.061 user 0m2.882s 00:29:45.061 sys 0m0.465s 00:29:45.061 00:49:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:45.061 ************************************ 00:29:45.061 END TEST dd_flag_noatime_forced_aio 00:29:45.061 ************************************ 00:29:45.061 00:49:18 -- common/autotest_common.sh@10 -- # set +x 00:29:45.320 00:49:18 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:29:45.320 00:49:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:45.320 00:49:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:45.320 00:49:18 -- common/autotest_common.sh@10 -- # set +x 00:29:45.320 ************************************ 00:29:45.320 START TEST dd_flags_misc_forced_aio 00:29:45.320 ************************************ 00:29:45.320 00:49:18 -- common/autotest_common.sh@1111 -- # io 00:29:45.320 00:49:18 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:29:45.320 00:49:18 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:29:45.321 00:49:18 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:29:45.321 00:49:18 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:29:45.321 00:49:18 -- dd/posix.sh@86 -- # gen_bytes 512 00:29:45.321 00:49:18 -- dd/common.sh@98 -- # xtrace_disable 00:29:45.321 00:49:18 -- common/autotest_common.sh@10 -- # set +x 00:29:45.321 00:49:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:45.321 00:49:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:29:45.321 [2024-04-27 00:49:18.775701] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:45.321 [2024-04-27 00:49:18.775950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143078 ] 00:29:45.579 [2024-04-27 00:49:18.943023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.579 [2024-04-27 00:49:19.166582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.131  Copying: 512/512 [B] (average 500 kBps) 00:29:47.131 00:29:47.131 00:49:20 -- dd/posix.sh@93 -- # [[ 3plwt3a6wjhygf8g3n12rfxrpa5if6bm87fo7ueu3m2u09k2dq06irm7c74heqmjii8uoe597jr48yygw4swh2flj18jrpe3may1j0ar1a70v9w2ov66e9dmmnzxljz25gl571w4yp7dtwafqcsrx9s8lx9wlm5ckd360rhztywehuhy0wcivxyw5emqnhdhmbppxcpc1grdxu91hjrsg99nfubbmgrnnz7ztck31d34qpnd6t0kyv7brgmilv4ybzpkd6tj6jgtfw4s123gapm1lmwavluxzjbf32ks90gp8kqhq018mnn47erkicae6d98s9jedgi0xcuhwsi4gzi61pk1g7kyiwbu9b0dfrqw13o19ueao28u8wfhv9s6tq841ailthvhua82kxshqc8vuldc2321e6vyvsq2oj37jlvhygupftx1t9oygtndbfqzviht6imq5p73rscbsmmhnokqna2eclpuba0mshzj2p55u2w7r0l72qwsahgm == \3\p\l\w\t\3\a\6\w\j\h\y\g\f\8\g\3\n\1\2\r\f\x\r\p\a\5\i\f\6\b\m\8\7\f\o\7\u\e\u\3\m\2\u\0\9\k\2\d\q\0\6\i\r\m\7\c\7\4\h\e\q\m\j\i\i\8\u\o\e\5\9\7\j\r\4\8\y\y\g\w\4\s\w\h\2\f\l\j\1\8\j\r\p\e\3\m\a\y\1\j\0\a\r\1\a\7\0\v\9\w\2\o\v\6\6\e\9\d\m\m\n\z\x\l\j\z\2\5\g\l\5\7\1\w\4\y\p\7\d\t\w\a\f\q\c\s\r\x\9\s\8\l\x\9\w\l\m\5\c\k\d\3\6\0\r\h\z\t\y\w\e\h\u\h\y\0\w\c\i\v\x\y\w\5\e\m\q\n\h\d\h\m\b\p\p\x\c\p\c\1\g\r\d\x\u\9\1\h\j\r\s\g\9\9\n\f\u\b\b\m\g\r\n\n\z\7\z\t\c\k\3\1\d\3\4\q\p\n\d\6\t\0\k\y\v\7\b\r\g\m\i\l\v\4\y\b\z\p\k\d\6\t\j\6\j\g\t\f\w\4\s\1\2\3\g\a\p\m\1\l\m\w\a\v\l\u\x\z\j\b\f\3\2\k\s\9\0\g\p\8\k\q\h\q\0\1\8\m\n\n\4\7\e\r\k\i\c\a\e\6\d\9\8\s\9\j\e\d\g\i\0\x\c\u\h\w\s\i\4\g\z\i\6\1\p\k\1\g\7\k\y\i\w\b\u\9\b\0\d\f\r\q\w\1\3\o\1\9\u\e\a\o\2\8\u\8\w\f\h\v\9\s\6\t\q\8\4\1\a\i\l\t\h\v\h\u\a\8\2\k\x\s\h\q\c\8\v\u\l\d\c\2\3\2\1\e\6\v\y\v\s\q\2\o\j\3\7\j\l\v\h\y\g\u\p\f\t\x\1\t\9\o\y\g\t\n\d\b\f\q\z\v\i\h\t\6\i\m\q\5\p\7\3\r\s\c\b\s\m\m\h\n\o\k\q\n\a\2\e\c\l\p\u\b\a\0\m\s\h\z\j\2\p\5\5\u\2\w\7\r\0\l\7\2\q\w\s\a\h\g\m ]] 00:29:47.131 00:49:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:47.131 00:49:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:29:47.131 [2024-04-27 00:49:20.623312] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:47.131 [2024-04-27 00:49:20.623679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143110 ] 00:29:47.389 [2024-04-27 00:49:20.797022] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.647 [2024-04-27 00:49:21.038855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.282  Copying: 512/512 [B] (average 500 kBps) 00:29:49.282 00:29:49.283 00:49:22 -- dd/posix.sh@93 -- # [[ 3plwt3a6wjhygf8g3n12rfxrpa5if6bm87fo7ueu3m2u09k2dq06irm7c74heqmjii8uoe597jr48yygw4swh2flj18jrpe3may1j0ar1a70v9w2ov66e9dmmnzxljz25gl571w4yp7dtwafqcsrx9s8lx9wlm5ckd360rhztywehuhy0wcivxyw5emqnhdhmbppxcpc1grdxu91hjrsg99nfubbmgrnnz7ztck31d34qpnd6t0kyv7brgmilv4ybzpkd6tj6jgtfw4s123gapm1lmwavluxzjbf32ks90gp8kqhq018mnn47erkicae6d98s9jedgi0xcuhwsi4gzi61pk1g7kyiwbu9b0dfrqw13o19ueao28u8wfhv9s6tq841ailthvhua82kxshqc8vuldc2321e6vyvsq2oj37jlvhygupftx1t9oygtndbfqzviht6imq5p73rscbsmmhnokqna2eclpuba0mshzj2p55u2w7r0l72qwsahgm == \3\p\l\w\t\3\a\6\w\j\h\y\g\f\8\g\3\n\1\2\r\f\x\r\p\a\5\i\f\6\b\m\8\7\f\o\7\u\e\u\3\m\2\u\0\9\k\2\d\q\0\6\i\r\m\7\c\7\4\h\e\q\m\j\i\i\8\u\o\e\5\9\7\j\r\4\8\y\y\g\w\4\s\w\h\2\f\l\j\1\8\j\r\p\e\3\m\a\y\1\j\0\a\r\1\a\7\0\v\9\w\2\o\v\6\6\e\9\d\m\m\n\z\x\l\j\z\2\5\g\l\5\7\1\w\4\y\p\7\d\t\w\a\f\q\c\s\r\x\9\s\8\l\x\9\w\l\m\5\c\k\d\3\6\0\r\h\z\t\y\w\e\h\u\h\y\0\w\c\i\v\x\y\w\5\e\m\q\n\h\d\h\m\b\p\p\x\c\p\c\1\g\r\d\x\u\9\1\h\j\r\s\g\9\9\n\f\u\b\b\m\g\r\n\n\z\7\z\t\c\k\3\1\d\3\4\q\p\n\d\6\t\0\k\y\v\7\b\r\g\m\i\l\v\4\y\b\z\p\k\d\6\t\j\6\j\g\t\f\w\4\s\1\2\3\g\a\p\m\1\l\m\w\a\v\l\u\x\z\j\b\f\3\2\k\s\9\0\g\p\8\k\q\h\q\0\1\8\m\n\n\4\7\e\r\k\i\c\a\e\6\d\9\8\s\9\j\e\d\g\i\0\x\c\u\h\w\s\i\4\g\z\i\6\1\p\k\1\g\7\k\y\i\w\b\u\9\b\0\d\f\r\q\w\1\3\o\1\9\u\e\a\o\2\8\u\8\w\f\h\v\9\s\6\t\q\8\4\1\a\i\l\t\h\v\h\u\a\8\2\k\x\s\h\q\c\8\v\u\l\d\c\2\3\2\1\e\6\v\y\v\s\q\2\o\j\3\7\j\l\v\h\y\g\u\p\f\t\x\1\t\9\o\y\g\t\n\d\b\f\q\z\v\i\h\t\6\i\m\q\5\p\7\3\r\s\c\b\s\m\m\h\n\o\k\q\n\a\2\e\c\l\p\u\b\a\0\m\s\h\z\j\2\p\5\5\u\2\w\7\r\0\l\7\2\q\w\s\a\h\g\m ]] 00:29:49.283 00:49:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:49.283 00:49:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:29:49.283 [2024-04-27 00:49:22.561651] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:49.283 [2024-04-27 00:49:22.561860] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143130 ] 00:29:49.283 [2024-04-27 00:49:22.729874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.546 [2024-04-27 00:49:22.923655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.737  Copying: 512/512 [B] (average 250 kBps) 00:29:50.737 00:29:50.738 00:49:24 -- dd/posix.sh@93 -- # [[ 3plwt3a6wjhygf8g3n12rfxrpa5if6bm87fo7ueu3m2u09k2dq06irm7c74heqmjii8uoe597jr48yygw4swh2flj18jrpe3may1j0ar1a70v9w2ov66e9dmmnzxljz25gl571w4yp7dtwafqcsrx9s8lx9wlm5ckd360rhztywehuhy0wcivxyw5emqnhdhmbppxcpc1grdxu91hjrsg99nfubbmgrnnz7ztck31d34qpnd6t0kyv7brgmilv4ybzpkd6tj6jgtfw4s123gapm1lmwavluxzjbf32ks90gp8kqhq018mnn47erkicae6d98s9jedgi0xcuhwsi4gzi61pk1g7kyiwbu9b0dfrqw13o19ueao28u8wfhv9s6tq841ailthvhua82kxshqc8vuldc2321e6vyvsq2oj37jlvhygupftx1t9oygtndbfqzviht6imq5p73rscbsmmhnokqna2eclpuba0mshzj2p55u2w7r0l72qwsahgm == \3\p\l\w\t\3\a\6\w\j\h\y\g\f\8\g\3\n\1\2\r\f\x\r\p\a\5\i\f\6\b\m\8\7\f\o\7\u\e\u\3\m\2\u\0\9\k\2\d\q\0\6\i\r\m\7\c\7\4\h\e\q\m\j\i\i\8\u\o\e\5\9\7\j\r\4\8\y\y\g\w\4\s\w\h\2\f\l\j\1\8\j\r\p\e\3\m\a\y\1\j\0\a\r\1\a\7\0\v\9\w\2\o\v\6\6\e\9\d\m\m\n\z\x\l\j\z\2\5\g\l\5\7\1\w\4\y\p\7\d\t\w\a\f\q\c\s\r\x\9\s\8\l\x\9\w\l\m\5\c\k\d\3\6\0\r\h\z\t\y\w\e\h\u\h\y\0\w\c\i\v\x\y\w\5\e\m\q\n\h\d\h\m\b\p\p\x\c\p\c\1\g\r\d\x\u\9\1\h\j\r\s\g\9\9\n\f\u\b\b\m\g\r\n\n\z\7\z\t\c\k\3\1\d\3\4\q\p\n\d\6\t\0\k\y\v\7\b\r\g\m\i\l\v\4\y\b\z\p\k\d\6\t\j\6\j\g\t\f\w\4\s\1\2\3\g\a\p\m\1\l\m\w\a\v\l\u\x\z\j\b\f\3\2\k\s\9\0\g\p\8\k\q\h\q\0\1\8\m\n\n\4\7\e\r\k\i\c\a\e\6\d\9\8\s\9\j\e\d\g\i\0\x\c\u\h\w\s\i\4\g\z\i\6\1\p\k\1\g\7\k\y\i\w\b\u\9\b\0\d\f\r\q\w\1\3\o\1\9\u\e\a\o\2\8\u\8\w\f\h\v\9\s\6\t\q\8\4\1\a\i\l\t\h\v\h\u\a\8\2\k\x\s\h\q\c\8\v\u\l\d\c\2\3\2\1\e\6\v\y\v\s\q\2\o\j\3\7\j\l\v\h\y\g\u\p\f\t\x\1\t\9\o\y\g\t\n\d\b\f\q\z\v\i\h\t\6\i\m\q\5\p\7\3\r\s\c\b\s\m\m\h\n\o\k\q\n\a\2\e\c\l\p\u\b\a\0\m\s\h\z\j\2\p\5\5\u\2\w\7\r\0\l\7\2\q\w\s\a\h\g\m ]] 00:29:50.738 00:49:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:50.738 00:49:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:29:50.738 [2024-04-27 00:49:24.298205] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:50.738 [2024-04-27 00:49:24.298427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143154 ] 00:29:50.995 [2024-04-27 00:49:24.464516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.252 [2024-04-27 00:49:24.666347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.883  Copying: 512/512 [B] (average 166 kBps) 00:29:52.883 00:29:52.883 00:49:26 -- dd/posix.sh@93 -- # [[ 3plwt3a6wjhygf8g3n12rfxrpa5if6bm87fo7ueu3m2u09k2dq06irm7c74heqmjii8uoe597jr48yygw4swh2flj18jrpe3may1j0ar1a70v9w2ov66e9dmmnzxljz25gl571w4yp7dtwafqcsrx9s8lx9wlm5ckd360rhztywehuhy0wcivxyw5emqnhdhmbppxcpc1grdxu91hjrsg99nfubbmgrnnz7ztck31d34qpnd6t0kyv7brgmilv4ybzpkd6tj6jgtfw4s123gapm1lmwavluxzjbf32ks90gp8kqhq018mnn47erkicae6d98s9jedgi0xcuhwsi4gzi61pk1g7kyiwbu9b0dfrqw13o19ueao28u8wfhv9s6tq841ailthvhua82kxshqc8vuldc2321e6vyvsq2oj37jlvhygupftx1t9oygtndbfqzviht6imq5p73rscbsmmhnokqna2eclpuba0mshzj2p55u2w7r0l72qwsahgm == \3\p\l\w\t\3\a\6\w\j\h\y\g\f\8\g\3\n\1\2\r\f\x\r\p\a\5\i\f\6\b\m\8\7\f\o\7\u\e\u\3\m\2\u\0\9\k\2\d\q\0\6\i\r\m\7\c\7\4\h\e\q\m\j\i\i\8\u\o\e\5\9\7\j\r\4\8\y\y\g\w\4\s\w\h\2\f\l\j\1\8\j\r\p\e\3\m\a\y\1\j\0\a\r\1\a\7\0\v\9\w\2\o\v\6\6\e\9\d\m\m\n\z\x\l\j\z\2\5\g\l\5\7\1\w\4\y\p\7\d\t\w\a\f\q\c\s\r\x\9\s\8\l\x\9\w\l\m\5\c\k\d\3\6\0\r\h\z\t\y\w\e\h\u\h\y\0\w\c\i\v\x\y\w\5\e\m\q\n\h\d\h\m\b\p\p\x\c\p\c\1\g\r\d\x\u\9\1\h\j\r\s\g\9\9\n\f\u\b\b\m\g\r\n\n\z\7\z\t\c\k\3\1\d\3\4\q\p\n\d\6\t\0\k\y\v\7\b\r\g\m\i\l\v\4\y\b\z\p\k\d\6\t\j\6\j\g\t\f\w\4\s\1\2\3\g\a\p\m\1\l\m\w\a\v\l\u\x\z\j\b\f\3\2\k\s\9\0\g\p\8\k\q\h\q\0\1\8\m\n\n\4\7\e\r\k\i\c\a\e\6\d\9\8\s\9\j\e\d\g\i\0\x\c\u\h\w\s\i\4\g\z\i\6\1\p\k\1\g\7\k\y\i\w\b\u\9\b\0\d\f\r\q\w\1\3\o\1\9\u\e\a\o\2\8\u\8\w\f\h\v\9\s\6\t\q\8\4\1\a\i\l\t\h\v\h\u\a\8\2\k\x\s\h\q\c\8\v\u\l\d\c\2\3\2\1\e\6\v\y\v\s\q\2\o\j\3\7\j\l\v\h\y\g\u\p\f\t\x\1\t\9\o\y\g\t\n\d\b\f\q\z\v\i\h\t\6\i\m\q\5\p\7\3\r\s\c\b\s\m\m\h\n\o\k\q\n\a\2\e\c\l\p\u\b\a\0\m\s\h\z\j\2\p\5\5\u\2\w\7\r\0\l\7\2\q\w\s\a\h\g\m ]] 00:29:52.883 00:49:26 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:29:52.883 00:49:26 -- dd/posix.sh@86 -- # gen_bytes 512 00:29:52.883 00:49:26 -- dd/common.sh@98 -- # xtrace_disable 00:29:52.883 00:49:26 -- common/autotest_common.sh@10 -- # set +x 00:29:52.883 00:49:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:52.883 00:49:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:29:52.883 [2024-04-27 00:49:26.154594] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:52.883 [2024-04-27 00:49:26.154906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143182 ] 00:29:52.883 [2024-04-27 00:49:26.324986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.141 [2024-04-27 00:49:26.498077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.357  Copying: 512/512 [B] (average 500 kBps) 00:29:54.357 00:29:54.357 00:49:27 -- dd/posix.sh@93 -- # [[ e92iggne9gaxfbrias8s0ewjqfuigtyzhv7wbojuj2m6pe4pjy8ine1jqmf4koaz8p6rol2fd92ujmi9u40d158bk4nt7eyhnv47ng39a5ylkyusww0pz1ikw802y02f4zers2snva2z28ucsxz8kzu3jlut2asnoejym3y7o0lkbgq71qfvbo3gplizp9ofuucgfo81re876tc76ysu4gg51ngpyaf60f6pn8d0ffv3sj9lmaqyl4ls59nxin22hzs24d2wps56uh9we7ow9ifd3a18p3oa1a3kw0gfw08bj737j64qp1q8fkiyn5gd2pf4a3ke65qw5kmyiyghi49evpnr5wt6trjl5w4kch5c0w4k3f32qhulj5y1r2zz0lk2pyyxvblldx1fmemzd16jcldkzo84liaj8w08woc18u4tz3vmk42d9t6868fy8kddrz1z9ztoe34vt54i596y2t4r9i7wdi9zn475g7haookgd5o9dztpwnbfu9n6 == \e\9\2\i\g\g\n\e\9\g\a\x\f\b\r\i\a\s\8\s\0\e\w\j\q\f\u\i\g\t\y\z\h\v\7\w\b\o\j\u\j\2\m\6\p\e\4\p\j\y\8\i\n\e\1\j\q\m\f\4\k\o\a\z\8\p\6\r\o\l\2\f\d\9\2\u\j\m\i\9\u\4\0\d\1\5\8\b\k\4\n\t\7\e\y\h\n\v\4\7\n\g\3\9\a\5\y\l\k\y\u\s\w\w\0\p\z\1\i\k\w\8\0\2\y\0\2\f\4\z\e\r\s\2\s\n\v\a\2\z\2\8\u\c\s\x\z\8\k\z\u\3\j\l\u\t\2\a\s\n\o\e\j\y\m\3\y\7\o\0\l\k\b\g\q\7\1\q\f\v\b\o\3\g\p\l\i\z\p\9\o\f\u\u\c\g\f\o\8\1\r\e\8\7\6\t\c\7\6\y\s\u\4\g\g\5\1\n\g\p\y\a\f\6\0\f\6\p\n\8\d\0\f\f\v\3\s\j\9\l\m\a\q\y\l\4\l\s\5\9\n\x\i\n\2\2\h\z\s\2\4\d\2\w\p\s\5\6\u\h\9\w\e\7\o\w\9\i\f\d\3\a\1\8\p\3\o\a\1\a\3\k\w\0\g\f\w\0\8\b\j\7\3\7\j\6\4\q\p\1\q\8\f\k\i\y\n\5\g\d\2\p\f\4\a\3\k\e\6\5\q\w\5\k\m\y\i\y\g\h\i\4\9\e\v\p\n\r\5\w\t\6\t\r\j\l\5\w\4\k\c\h\5\c\0\w\4\k\3\f\3\2\q\h\u\l\j\5\y\1\r\2\z\z\0\l\k\2\p\y\y\x\v\b\l\l\d\x\1\f\m\e\m\z\d\1\6\j\c\l\d\k\z\o\8\4\l\i\a\j\8\w\0\8\w\o\c\1\8\u\4\t\z\3\v\m\k\4\2\d\9\t\6\8\6\8\f\y\8\k\d\d\r\z\1\z\9\z\t\o\e\3\4\v\t\5\4\i\5\9\6\y\2\t\4\r\9\i\7\w\d\i\9\z\n\4\7\5\g\7\h\a\o\o\k\g\d\5\o\9\d\z\t\p\w\n\b\f\u\9\n\6 ]] 00:29:54.357 00:49:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:54.357 00:49:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:29:54.357 [2024-04-27 00:49:27.935575] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:54.357 [2024-04-27 00:49:27.936436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143207 ] 00:29:54.616 [2024-04-27 00:49:28.108833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.874 [2024-04-27 00:49:28.322368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.518  Copying: 512/512 [B] (average 500 kBps) 00:29:56.518 00:29:56.519 00:49:29 -- dd/posix.sh@93 -- # [[ e92iggne9gaxfbrias8s0ewjqfuigtyzhv7wbojuj2m6pe4pjy8ine1jqmf4koaz8p6rol2fd92ujmi9u40d158bk4nt7eyhnv47ng39a5ylkyusww0pz1ikw802y02f4zers2snva2z28ucsxz8kzu3jlut2asnoejym3y7o0lkbgq71qfvbo3gplizp9ofuucgfo81re876tc76ysu4gg51ngpyaf60f6pn8d0ffv3sj9lmaqyl4ls59nxin22hzs24d2wps56uh9we7ow9ifd3a18p3oa1a3kw0gfw08bj737j64qp1q8fkiyn5gd2pf4a3ke65qw5kmyiyghi49evpnr5wt6trjl5w4kch5c0w4k3f32qhulj5y1r2zz0lk2pyyxvblldx1fmemzd16jcldkzo84liaj8w08woc18u4tz3vmk42d9t6868fy8kddrz1z9ztoe34vt54i596y2t4r9i7wdi9zn475g7haookgd5o9dztpwnbfu9n6 == \e\9\2\i\g\g\n\e\9\g\a\x\f\b\r\i\a\s\8\s\0\e\w\j\q\f\u\i\g\t\y\z\h\v\7\w\b\o\j\u\j\2\m\6\p\e\4\p\j\y\8\i\n\e\1\j\q\m\f\4\k\o\a\z\8\p\6\r\o\l\2\f\d\9\2\u\j\m\i\9\u\4\0\d\1\5\8\b\k\4\n\t\7\e\y\h\n\v\4\7\n\g\3\9\a\5\y\l\k\y\u\s\w\w\0\p\z\1\i\k\w\8\0\2\y\0\2\f\4\z\e\r\s\2\s\n\v\a\2\z\2\8\u\c\s\x\z\8\k\z\u\3\j\l\u\t\2\a\s\n\o\e\j\y\m\3\y\7\o\0\l\k\b\g\q\7\1\q\f\v\b\o\3\g\p\l\i\z\p\9\o\f\u\u\c\g\f\o\8\1\r\e\8\7\6\t\c\7\6\y\s\u\4\g\g\5\1\n\g\p\y\a\f\6\0\f\6\p\n\8\d\0\f\f\v\3\s\j\9\l\m\a\q\y\l\4\l\s\5\9\n\x\i\n\2\2\h\z\s\2\4\d\2\w\p\s\5\6\u\h\9\w\e\7\o\w\9\i\f\d\3\a\1\8\p\3\o\a\1\a\3\k\w\0\g\f\w\0\8\b\j\7\3\7\j\6\4\q\p\1\q\8\f\k\i\y\n\5\g\d\2\p\f\4\a\3\k\e\6\5\q\w\5\k\m\y\i\y\g\h\i\4\9\e\v\p\n\r\5\w\t\6\t\r\j\l\5\w\4\k\c\h\5\c\0\w\4\k\3\f\3\2\q\h\u\l\j\5\y\1\r\2\z\z\0\l\k\2\p\y\y\x\v\b\l\l\d\x\1\f\m\e\m\z\d\1\6\j\c\l\d\k\z\o\8\4\l\i\a\j\8\w\0\8\w\o\c\1\8\u\4\t\z\3\v\m\k\4\2\d\9\t\6\8\6\8\f\y\8\k\d\d\r\z\1\z\9\z\t\o\e\3\4\v\t\5\4\i\5\9\6\y\2\t\4\r\9\i\7\w\d\i\9\z\n\4\7\5\g\7\h\a\o\o\k\g\d\5\o\9\d\z\t\p\w\n\b\f\u\9\n\6 ]] 00:29:56.519 00:49:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:56.519 00:49:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:29:56.519 [2024-04-27 00:49:29.820956] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:56.519 [2024-04-27 00:49:29.821124] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143231 ] 00:29:56.519 [2024-04-27 00:49:29.985762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.780 [2024-04-27 00:49:30.205725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.415  Copying: 512/512 [B] (average 125 kBps) 00:29:58.415 00:29:58.415 00:49:31 -- dd/posix.sh@93 -- # [[ e92iggne9gaxfbrias8s0ewjqfuigtyzhv7wbojuj2m6pe4pjy8ine1jqmf4koaz8p6rol2fd92ujmi9u40d158bk4nt7eyhnv47ng39a5ylkyusww0pz1ikw802y02f4zers2snva2z28ucsxz8kzu3jlut2asnoejym3y7o0lkbgq71qfvbo3gplizp9ofuucgfo81re876tc76ysu4gg51ngpyaf60f6pn8d0ffv3sj9lmaqyl4ls59nxin22hzs24d2wps56uh9we7ow9ifd3a18p3oa1a3kw0gfw08bj737j64qp1q8fkiyn5gd2pf4a3ke65qw5kmyiyghi49evpnr5wt6trjl5w4kch5c0w4k3f32qhulj5y1r2zz0lk2pyyxvblldx1fmemzd16jcldkzo84liaj8w08woc18u4tz3vmk42d9t6868fy8kddrz1z9ztoe34vt54i596y2t4r9i7wdi9zn475g7haookgd5o9dztpwnbfu9n6 == \e\9\2\i\g\g\n\e\9\g\a\x\f\b\r\i\a\s\8\s\0\e\w\j\q\f\u\i\g\t\y\z\h\v\7\w\b\o\j\u\j\2\m\6\p\e\4\p\j\y\8\i\n\e\1\j\q\m\f\4\k\o\a\z\8\p\6\r\o\l\2\f\d\9\2\u\j\m\i\9\u\4\0\d\1\5\8\b\k\4\n\t\7\e\y\h\n\v\4\7\n\g\3\9\a\5\y\l\k\y\u\s\w\w\0\p\z\1\i\k\w\8\0\2\y\0\2\f\4\z\e\r\s\2\s\n\v\a\2\z\2\8\u\c\s\x\z\8\k\z\u\3\j\l\u\t\2\a\s\n\o\e\j\y\m\3\y\7\o\0\l\k\b\g\q\7\1\q\f\v\b\o\3\g\p\l\i\z\p\9\o\f\u\u\c\g\f\o\8\1\r\e\8\7\6\t\c\7\6\y\s\u\4\g\g\5\1\n\g\p\y\a\f\6\0\f\6\p\n\8\d\0\f\f\v\3\s\j\9\l\m\a\q\y\l\4\l\s\5\9\n\x\i\n\2\2\h\z\s\2\4\d\2\w\p\s\5\6\u\h\9\w\e\7\o\w\9\i\f\d\3\a\1\8\p\3\o\a\1\a\3\k\w\0\g\f\w\0\8\b\j\7\3\7\j\6\4\q\p\1\q\8\f\k\i\y\n\5\g\d\2\p\f\4\a\3\k\e\6\5\q\w\5\k\m\y\i\y\g\h\i\4\9\e\v\p\n\r\5\w\t\6\t\r\j\l\5\w\4\k\c\h\5\c\0\w\4\k\3\f\3\2\q\h\u\l\j\5\y\1\r\2\z\z\0\l\k\2\p\y\y\x\v\b\l\l\d\x\1\f\m\e\m\z\d\1\6\j\c\l\d\k\z\o\8\4\l\i\a\j\8\w\0\8\w\o\c\1\8\u\4\t\z\3\v\m\k\4\2\d\9\t\6\8\6\8\f\y\8\k\d\d\r\z\1\z\9\z\t\o\e\3\4\v\t\5\4\i\5\9\6\y\2\t\4\r\9\i\7\w\d\i\9\z\n\4\7\5\g\7\h\a\o\o\k\g\d\5\o\9\d\z\t\p\w\n\b\f\u\9\n\6 ]] 00:29:58.415 00:49:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:29:58.415 00:49:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:29:58.415 [2024-04-27 00:49:31.751860] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:29:58.415 [2024-04-27 00:49:31.752088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143253 ] 00:29:58.415 [2024-04-27 00:49:31.921451] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.673 [2024-04-27 00:49:32.126793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.309  Copying: 512/512 [B] (average 166 kBps) 00:30:00.309 00:30:00.309 ************************************ 00:30:00.309 END TEST dd_flags_misc_forced_aio 00:30:00.310 ************************************ 00:30:00.310 00:49:33 -- dd/posix.sh@93 -- # [[ e92iggne9gaxfbrias8s0ewjqfuigtyzhv7wbojuj2m6pe4pjy8ine1jqmf4koaz8p6rol2fd92ujmi9u40d158bk4nt7eyhnv47ng39a5ylkyusww0pz1ikw802y02f4zers2snva2z28ucsxz8kzu3jlut2asnoejym3y7o0lkbgq71qfvbo3gplizp9ofuucgfo81re876tc76ysu4gg51ngpyaf60f6pn8d0ffv3sj9lmaqyl4ls59nxin22hzs24d2wps56uh9we7ow9ifd3a18p3oa1a3kw0gfw08bj737j64qp1q8fkiyn5gd2pf4a3ke65qw5kmyiyghi49evpnr5wt6trjl5w4kch5c0w4k3f32qhulj5y1r2zz0lk2pyyxvblldx1fmemzd16jcldkzo84liaj8w08woc18u4tz3vmk42d9t6868fy8kddrz1z9ztoe34vt54i596y2t4r9i7wdi9zn475g7haookgd5o9dztpwnbfu9n6 == \e\9\2\i\g\g\n\e\9\g\a\x\f\b\r\i\a\s\8\s\0\e\w\j\q\f\u\i\g\t\y\z\h\v\7\w\b\o\j\u\j\2\m\6\p\e\4\p\j\y\8\i\n\e\1\j\q\m\f\4\k\o\a\z\8\p\6\r\o\l\2\f\d\9\2\u\j\m\i\9\u\4\0\d\1\5\8\b\k\4\n\t\7\e\y\h\n\v\4\7\n\g\3\9\a\5\y\l\k\y\u\s\w\w\0\p\z\1\i\k\w\8\0\2\y\0\2\f\4\z\e\r\s\2\s\n\v\a\2\z\2\8\u\c\s\x\z\8\k\z\u\3\j\l\u\t\2\a\s\n\o\e\j\y\m\3\y\7\o\0\l\k\b\g\q\7\1\q\f\v\b\o\3\g\p\l\i\z\p\9\o\f\u\u\c\g\f\o\8\1\r\e\8\7\6\t\c\7\6\y\s\u\4\g\g\5\1\n\g\p\y\a\f\6\0\f\6\p\n\8\d\0\f\f\v\3\s\j\9\l\m\a\q\y\l\4\l\s\5\9\n\x\i\n\2\2\h\z\s\2\4\d\2\w\p\s\5\6\u\h\9\w\e\7\o\w\9\i\f\d\3\a\1\8\p\3\o\a\1\a\3\k\w\0\g\f\w\0\8\b\j\7\3\7\j\6\4\q\p\1\q\8\f\k\i\y\n\5\g\d\2\p\f\4\a\3\k\e\6\5\q\w\5\k\m\y\i\y\g\h\i\4\9\e\v\p\n\r\5\w\t\6\t\r\j\l\5\w\4\k\c\h\5\c\0\w\4\k\3\f\3\2\q\h\u\l\j\5\y\1\r\2\z\z\0\l\k\2\p\y\y\x\v\b\l\l\d\x\1\f\m\e\m\z\d\1\6\j\c\l\d\k\z\o\8\4\l\i\a\j\8\w\0\8\w\o\c\1\8\u\4\t\z\3\v\m\k\4\2\d\9\t\6\8\6\8\f\y\8\k\d\d\r\z\1\z\9\z\t\o\e\3\4\v\t\5\4\i\5\9\6\y\2\t\4\r\9\i\7\w\d\i\9\z\n\4\7\5\g\7\h\a\o\o\k\g\d\5\o\9\d\z\t\p\w\n\b\f\u\9\n\6 ]] 00:30:00.310 00:30:00.310 real 0m14.899s 00:30:00.310 user 0m11.875s 00:30:00.310 sys 0m1.944s 00:30:00.310 00:49:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:00.310 00:49:33 -- common/autotest_common.sh@10 -- # set +x 00:30:00.310 00:49:33 -- dd/posix.sh@1 -- # cleanup 00:30:00.310 00:49:33 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:30:00.310 00:49:33 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:30:00.310 ************************************ 00:30:00.310 END TEST spdk_dd_posix 00:30:00.310 ************************************ 00:30:00.310 00:30:00.310 real 1m3.611s 00:30:00.310 user 0m49.227s 00:30:00.310 sys 0m8.308s 00:30:00.310 00:49:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:00.310 00:49:33 -- common/autotest_common.sh@10 -- # set +x 00:30:00.310 00:49:33 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:30:00.310 00:49:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:00.310 00:49:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:00.310 00:49:33 -- 
common/autotest_common.sh@10 -- # set +x 00:30:00.310 ************************************ 00:30:00.310 START TEST spdk_dd_malloc 00:30:00.310 ************************************ 00:30:00.310 00:49:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:30:00.310 * Looking for test storage... 00:30:00.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:00.310 00:49:33 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:00.310 00:49:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.310 00:49:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.310 00:49:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.310 00:49:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:00.310 00:49:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:00.310 00:49:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:00.310 00:49:33 -- paths/export.sh@5 -- # export PATH 00:30:00.310 00:49:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:00.310 00:49:33 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:30:00.310 00:49:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:00.310 00:49:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:00.310 00:49:33 -- common/autotest_common.sh@10 -- # set +x 00:30:00.310 ************************************ 00:30:00.310 START TEST dd_malloc_copy 00:30:00.310 ************************************ 00:30:00.310 00:49:33 -- 
common/autotest_common.sh@1111 -- # malloc_copy 00:30:00.310 00:49:33 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:30:00.310 00:49:33 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:30:00.310 00:49:33 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:30:00.310 00:49:33 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:30:00.310 00:49:33 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:30:00.310 00:49:33 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:30:00.310 00:49:33 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:30:00.310 00:49:33 -- dd/malloc.sh@28 -- # gen_conf 00:30:00.310 00:49:33 -- dd/common.sh@31 -- # xtrace_disable 00:30:00.310 00:49:33 -- common/autotest_common.sh@10 -- # set +x 00:30:00.568 { 00:30:00.568 "subsystems": [ 00:30:00.568 { 00:30:00.568 "subsystem": "bdev", 00:30:00.568 "config": [ 00:30:00.568 { 00:30:00.568 "params": { 00:30:00.568 "block_size": 512, 00:30:00.568 "num_blocks": 1048576, 00:30:00.568 "name": "malloc0" 00:30:00.568 }, 00:30:00.568 "method": "bdev_malloc_create" 00:30:00.568 }, 00:30:00.568 { 00:30:00.568 "params": { 00:30:00.568 "block_size": 512, 00:30:00.568 "num_blocks": 1048576, 00:30:00.568 "name": "malloc1" 00:30:00.568 }, 00:30:00.568 "method": "bdev_malloc_create" 00:30:00.568 }, 00:30:00.568 { 00:30:00.568 "method": "bdev_wait_for_examine" 00:30:00.568 } 00:30:00.568 ] 00:30:00.568 } 00:30:00.568 ] 00:30:00.568 } 00:30:00.568 [2024-04-27 00:49:33.946844] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:00.568 [2024-04-27 00:49:33.947037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143364 ] 00:30:00.568 [2024-04-27 00:49:34.118432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.827 [2024-04-27 00:49:34.329466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.059  Copying: 209/512 [MB] (209 MBps) Copying: 405/512 [MB] (196 MBps) Copying: 512/512 [MB] (average 199 MBps) 00:30:09.059 00:30:09.059 00:49:41 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:30:09.059 00:49:41 -- dd/malloc.sh@33 -- # gen_conf 00:30:09.059 00:49:41 -- dd/common.sh@31 -- # xtrace_disable 00:30:09.059 00:49:41 -- common/autotest_common.sh@10 -- # set +x 00:30:09.059 { 00:30:09.059 "subsystems": [ 00:30:09.059 { 00:30:09.059 "subsystem": "bdev", 00:30:09.059 "config": [ 00:30:09.059 { 00:30:09.059 "params": { 00:30:09.059 "block_size": 512, 00:30:09.059 "num_blocks": 1048576, 00:30:09.059 "name": "malloc0" 00:30:09.059 }, 00:30:09.059 "method": "bdev_malloc_create" 00:30:09.059 }, 00:30:09.059 { 00:30:09.059 "params": { 00:30:09.059 "block_size": 512, 00:30:09.059 "num_blocks": 1048576, 00:30:09.059 "name": "malloc1" 00:30:09.059 }, 00:30:09.059 "method": "bdev_malloc_create" 00:30:09.059 }, 00:30:09.059 { 00:30:09.059 "method": "bdev_wait_for_examine" 00:30:09.059 } 00:30:09.059 ] 00:30:09.059 } 00:30:09.059 ] 00:30:09.059 } 00:30:09.059 [2024-04-27 00:49:42.015628] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:30:09.059 [2024-04-27 00:49:42.015887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143464 ] 00:30:09.059 [2024-04-27 00:49:42.184935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.059 [2024-04-27 00:49:42.451374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.285  Copying: 197/512 [MB] (197 MBps) Copying: 381/512 [MB] (184 MBps) Copying: 512/512 [MB] (average 188 MBps) 00:30:17.285 00:30:17.285 00:30:17.285 real 0m16.363s 00:30:17.285 user 0m14.967s 00:30:17.285 sys 0m1.259s 00:30:17.285 00:49:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:17.285 00:49:50 -- common/autotest_common.sh@10 -- # set +x 00:30:17.285 ************************************ 00:30:17.285 END TEST dd_malloc_copy 00:30:17.285 ************************************ 00:30:17.285 ************************************ 00:30:17.285 END TEST spdk_dd_malloc 00:30:17.285 ************************************ 00:30:17.285 00:30:17.285 real 0m16.547s 00:30:17.285 user 0m15.049s 00:30:17.285 sys 0m1.366s 00:30:17.286 00:49:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:17.286 00:49:50 -- common/autotest_common.sh@10 -- # set +x 00:30:17.286 00:49:50 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:30:17.286 00:49:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:17.286 00:49:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:17.286 00:49:50 -- common/autotest_common.sh@10 -- # set +x 00:30:17.286 ************************************ 00:30:17.286 
START TEST spdk_dd_bdev_to_bdev 00:30:17.286 ************************************ 00:30:17.286 00:49:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:30:17.286 * Looking for test storage... 00:30:17.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:17.286 00:49:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:17.286 00:49:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.286 00:49:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.286 00:49:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.286 00:49:50 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:17.286 00:49:50 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:17.286 00:49:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:17.286 00:49:50 -- paths/export.sh@5 -- # export PATH 00:30:17.286 00:49:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:30:17.286 00:49:50 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:30:17.286 [2024-04-27 00:49:50.527284] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:30:17.286 [2024-04-27 00:49:50.527481] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143632 ] 00:30:17.286 [2024-04-27 00:49:50.695753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.544 [2024-04-27 00:49:50.952717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.496  Copying: 256/256 [MB] (average 1174 MBps) 00:30:19.496 00:30:19.496 00:49:52 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:19.496 00:49:52 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:19.496 00:49:52 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:30:19.496 00:49:52 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:30:19.496 00:49:52 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:30:19.496 00:49:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:30:19.496 00:49:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:19.496 00:49:52 -- common/autotest_common.sh@10 -- # set +x 00:30:19.496 ************************************ 00:30:19.496 START TEST dd_inflate_file 00:30:19.496 ************************************ 00:30:19.496 00:49:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:30:19.496 [2024-04-27 00:49:52.763070] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:19.496 [2024-04-27 00:49:52.763252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143667 ] 00:30:19.496 [2024-04-27 00:49:52.923414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.758 [2024-04-27 00:49:53.140266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.387  Copying: 64/64 [MB] (average 955 MBps) 00:30:21.387 00:30:21.387 00:30:21.387 real 0m2.029s 00:30:21.387 user 0m1.638s 00:30:21.387 sys 0m0.262s 00:30:21.387 00:49:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:21.387 ************************************ 00:30:21.387 END TEST dd_inflate_file 00:30:21.387 00:49:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.387 ************************************ 00:30:21.387 00:49:54 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:30:21.387 00:49:54 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:30:21.387 00:49:54 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:30:21.387 00:49:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:30:21.387 00:49:54 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:30:21.387 00:49:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:21.387 00:49:54 -- dd/common.sh@31 -- # xtrace_disable 00:30:21.387 00:49:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.387 00:49:54 -- common/autotest_common.sh@10 -- # set +x 00:30:21.387 ************************************ 00:30:21.387 START TEST dd_copy_to_out_bdev 00:30:21.387 ************************************ 00:30:21.387 00:49:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:30:21.387 { 00:30:21.387 "subsystems": [ 00:30:21.387 { 00:30:21.387 "subsystem": "bdev", 00:30:21.387 "config": [ 00:30:21.387 { 00:30:21.387 "params": { 00:30:21.387 "block_size": 4096, 00:30:21.387 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:21.387 "name": "aio1" 00:30:21.387 }, 00:30:21.387 "method": "bdev_aio_create" 00:30:21.387 }, 00:30:21.387 { 00:30:21.387 "params": { 00:30:21.387 "trtype": "pcie", 00:30:21.387 "traddr": "0000:00:10.0", 00:30:21.387 "name": "Nvme0" 00:30:21.387 }, 00:30:21.387 "method": "bdev_nvme_attach_controller" 00:30:21.387 }, 00:30:21.387 { 00:30:21.387 "method": "bdev_wait_for_examine" 00:30:21.387 } 00:30:21.387 ] 00:30:21.387 } 00:30:21.387 ] 00:30:21.387 } 00:30:21.387 [2024-04-27 00:49:54.886666] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:21.387 [2024-04-27 00:49:54.886870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143731 ] 00:30:21.645 [2024-04-27 00:49:55.046128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.917 [2024-04-27 00:49:55.260083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.929  Copying: 45/64 [MB] (45 MBps) Copying: 64/64 [MB] (average 46 MBps) 00:30:24.929 00:30:24.929 00:30:24.929 real 0m3.431s 00:30:24.929 user 0m3.064s 00:30:24.929 sys 0m0.278s 00:30:24.929 00:49:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:24.929 00:49:58 -- common/autotest_common.sh@10 -- # set +x 00:30:24.929 ************************************ 00:30:24.929 END TEST dd_copy_to_out_bdev 00:30:24.929 ************************************ 00:30:24.929 00:49:58 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:30:24.929 00:49:58 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:30:24.929 00:49:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:24.929 00:49:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:24.929 00:49:58 -- common/autotest_common.sh@10 -- # set +x 00:30:24.929 ************************************ 00:30:24.929 START TEST dd_offset_magic 00:30:24.929 ************************************ 00:30:24.929 00:49:58 -- common/autotest_common.sh@1111 -- # offset_magic 00:30:24.929 00:49:58 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:30:24.929 00:49:58 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:30:24.929 00:49:58 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:30:24.929 00:49:58 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:30:24.929 00:49:58 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:30:24.929 00:49:58 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:30:24.929 00:49:58 -- dd/common.sh@31 -- # xtrace_disable 00:30:24.929 00:49:58 -- common/autotest_common.sh@10 -- # set +x 00:30:24.929 { 00:30:24.929 "subsystems": [ 00:30:24.929 { 00:30:24.929 "subsystem": "bdev", 00:30:24.929 "config": [ 00:30:24.929 { 00:30:24.929 "params": { 00:30:24.929 "block_size": 4096, 00:30:24.929 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:24.929 "name": "aio1" 00:30:24.929 }, 00:30:24.929 "method": "bdev_aio_create" 00:30:24.929 }, 00:30:24.929 { 00:30:24.929 "params": { 00:30:24.929 "trtype": "pcie", 00:30:24.929 "traddr": "0000:00:10.0", 00:30:24.929 "name": "Nvme0" 00:30:24.929 }, 00:30:24.929 "method": "bdev_nvme_attach_controller" 00:30:24.929 }, 00:30:24.929 { 00:30:24.929 "method": "bdev_wait_for_examine" 00:30:24.929 } 00:30:24.929 ] 00:30:24.929 } 00:30:24.929 ] 00:30:24.929 } 00:30:24.929 [2024-04-27 00:49:58.414503] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:24.929 [2024-04-27 00:49:58.414669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143799 ] 00:30:25.187 [2024-04-27 00:49:58.574823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.444 [2024-04-27 00:49:58.791162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.311  Copying: 65/65 [MB] (average 150 MBps) 00:30:27.311 00:30:27.311 00:50:00 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:30:27.311 00:50:00 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:30:27.311 00:50:00 -- dd/common.sh@31 -- # xtrace_disable 00:30:27.311 00:50:00 -- common/autotest_common.sh@10 -- # set +x 00:30:27.311 { 00:30:27.311 "subsystems": [ 00:30:27.311 { 00:30:27.311 "subsystem": "bdev", 00:30:27.311 "config": [ 00:30:27.311 { 00:30:27.311 "params": { 00:30:27.311 "block_size": 4096, 00:30:27.311 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:27.311 "name": "aio1" 00:30:27.311 }, 00:30:27.311 "method": "bdev_aio_create" 00:30:27.311 }, 00:30:27.311 { 00:30:27.311 "params": { 00:30:27.311 "trtype": "pcie", 00:30:27.311 "traddr": "0000:00:10.0", 00:30:27.311 "name": "Nvme0" 00:30:27.311 }, 00:30:27.311 "method": "bdev_nvme_attach_controller" 00:30:27.311 }, 00:30:27.311 { 00:30:27.311 "method": "bdev_wait_for_examine" 00:30:27.311 } 00:30:27.311 ] 00:30:27.311 } 00:30:27.311 ] 00:30:27.311 } 00:30:27.311 [2024-04-27 00:50:00.884270] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:27.311 [2024-04-27 00:50:00.884466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143837 ] 00:30:27.569 [2024-04-27 00:50:01.054916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.826 [2024-04-27 00:50:01.268587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.761  Copying: 1024/1024 [kB] (average 500 MBps) 00:30:29.761 00:30:29.761 00:50:02 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:30:29.761 00:50:02 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:30:29.761 00:50:02 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:30:29.761 00:50:02 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:30:29.761 00:50:02 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:30:29.761 00:50:02 -- dd/common.sh@31 -- # xtrace_disable 00:30:29.761 00:50:02 -- common/autotest_common.sh@10 -- # set +x 00:30:29.761 { 00:30:29.761 "subsystems": [ 00:30:29.761 { 00:30:29.761 "subsystem": "bdev", 00:30:29.761 "config": [ 00:30:29.761 { 00:30:29.761 "params": { 00:30:29.761 "block_size": 4096, 00:30:29.761 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:29.761 "name": "aio1" 00:30:29.761 }, 00:30:29.761 "method": "bdev_aio_create" 00:30:29.761 }, 00:30:29.761 { 00:30:29.761 "params": { 00:30:29.761 "trtype": "pcie", 00:30:29.761 "traddr": "0000:00:10.0", 00:30:29.761 "name": "Nvme0" 00:30:29.761 }, 00:30:29.761 "method": "bdev_nvme_attach_controller" 00:30:29.761 }, 00:30:29.761 { 00:30:29.761 "method": "bdev_wait_for_examine" 00:30:29.761 } 00:30:29.761 ] 00:30:29.761 } 00:30:29.761 ] 00:30:29.761 } 00:30:29.761 [2024-04-27 00:50:03.007392] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:29.761 [2024-04-27 00:50:03.007618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143872 ] 00:30:29.761 [2024-04-27 00:50:03.180977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.019 [2024-04-27 00:50:03.405164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.974  Copying: 65/65 [MB] (average 208 MBps) 00:30:31.974 00:30:31.974 00:50:05 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:30:31.974 00:50:05 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:30:31.974 00:50:05 -- dd/common.sh@31 -- # xtrace_disable 00:30:31.974 00:50:05 -- common/autotest_common.sh@10 -- # set +x 00:30:31.974 { 00:30:31.974 "subsystems": [ 00:30:31.974 { 00:30:31.974 "subsystem": "bdev", 00:30:31.974 "config": [ 00:30:31.974 { 00:30:31.974 "params": { 00:30:31.974 "block_size": 4096, 00:30:31.974 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:31.974 "name": "aio1" 00:30:31.974 }, 00:30:31.974 "method": "bdev_aio_create" 00:30:31.974 }, 00:30:31.974 { 00:30:31.974 "params": { 00:30:31.974 "trtype": "pcie", 00:30:31.974 "traddr": "0000:00:10.0", 00:30:31.974 "name": "Nvme0" 00:30:31.974 }, 00:30:31.974 "method": "bdev_nvme_attach_controller" 00:30:31.974 }, 00:30:31.974 { 00:30:31.974 "method": "bdev_wait_for_examine" 00:30:31.974 } 00:30:31.974 ] 00:30:31.974 } 00:30:31.974 ] 00:30:31.974 } 00:30:31.974 [2024-04-27 00:50:05.387022] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:31.975 [2024-04-27 00:50:05.387237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143902 ] 00:30:31.975 [2024-04-27 00:50:05.555091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.232 [2024-04-27 00:50:05.772009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.166  Copying: 1024/1024 [kB] (average 1000 MBps) 00:30:34.166 00:30:34.166 00:50:07 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:30:34.166 00:50:07 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:30:34.166 00:30:34.166 real 0m9.035s 00:30:34.166 user 0m6.868s 00:30:34.166 sys 0m1.147s 00:30:34.166 ************************************ 00:30:34.166 00:50:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:34.166 00:50:07 -- common/autotest_common.sh@10 -- # set +x 00:30:34.166 END TEST dd_offset_magic 00:30:34.166 ************************************ 00:30:34.166 00:50:07 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:30:34.166 00:50:07 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:30:34.166 00:50:07 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:30:34.166 00:50:07 -- dd/common.sh@11 -- # local nvme_ref= 00:30:34.166 00:50:07 -- dd/common.sh@12 -- # local size=4194330 00:30:34.166 00:50:07 -- dd/common.sh@14 -- # local bs=1048576 00:30:34.166 00:50:07 -- dd/common.sh@15 -- # local count=5 00:30:34.166 00:50:07 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:30:34.166 00:50:07 -- dd/common.sh@18 -- # gen_conf 00:30:34.166 00:50:07 -- dd/common.sh@31 -- # xtrace_disable 00:30:34.166 00:50:07 -- common/autotest_common.sh@10 -- # set +x 00:30:34.166 { 00:30:34.166 "subsystems": [ 00:30:34.166 { 00:30:34.166 "subsystem": "bdev", 00:30:34.166 "config": [ 00:30:34.166 { 00:30:34.166 "params": { 00:30:34.166 "block_size": 4096, 00:30:34.166 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:34.166 "name": "aio1" 00:30:34.166 }, 00:30:34.166 "method": "bdev_aio_create" 00:30:34.166 }, 00:30:34.166 { 00:30:34.166 "params": { 00:30:34.166 "trtype": "pcie", 00:30:34.166 "traddr": "0000:00:10.0", 00:30:34.166 "name": "Nvme0" 00:30:34.166 }, 00:30:34.166 "method": "bdev_nvme_attach_controller" 00:30:34.166 }, 00:30:34.166 { 00:30:34.166 "method": "bdev_wait_for_examine" 00:30:34.166 } 00:30:34.166 ] 00:30:34.166 } 00:30:34.166 ] 00:30:34.166 } 00:30:34.166 [2024-04-27 00:50:07.505438] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:34.166 [2024-04-27 00:50:07.506432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143954 ] 00:30:34.166 [2024-04-27 00:50:07.683642] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.424 [2024-04-27 00:50:07.902083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.369  Copying: 5120/5120 [kB] (average 1000 MBps) 00:30:36.369 00:30:36.369 00:50:09 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:30:36.369 00:50:09 -- dd/common.sh@10 -- # local bdev=aio1 00:30:36.369 00:50:09 -- dd/common.sh@11 -- # local nvme_ref= 00:30:36.369 00:50:09 -- dd/common.sh@12 -- # local size=4194330 00:30:36.369 00:50:09 -- dd/common.sh@14 -- # local bs=1048576 00:30:36.369 00:50:09 -- dd/common.sh@15 -- # local count=5 00:30:36.369 00:50:09 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:30:36.369 00:50:09 -- dd/common.sh@18 -- # gen_conf 00:30:36.369 00:50:09 -- dd/common.sh@31 -- # xtrace_disable 00:30:36.369 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:30:36.369 { 00:30:36.369 "subsystems": [ 00:30:36.369 { 00:30:36.369 "subsystem": "bdev", 00:30:36.369 "config": [ 00:30:36.369 { 00:30:36.369 "params": { 00:30:36.369 "block_size": 4096, 00:30:36.369 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:30:36.369 "name": "aio1" 00:30:36.369 }, 00:30:36.369 "method": "bdev_aio_create" 00:30:36.369 }, 00:30:36.369 { 00:30:36.369 "params": { 00:30:36.369 "trtype": "pcie", 00:30:36.369 "traddr": "0000:00:10.0", 00:30:36.369 "name": "Nvme0" 00:30:36.369 }, 00:30:36.369 "method": "bdev_nvme_attach_controller" 00:30:36.369 }, 00:30:36.369 { 00:30:36.369 "method": "bdev_wait_for_examine" 00:30:36.369 } 00:30:36.369 ] 00:30:36.369 } 00:30:36.369 ] 00:30:36.369 } 00:30:36.369 [2024-04-27 00:50:09.660400] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:36.369 [2024-04-27 00:50:09.660671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143991 ] 00:30:36.369 [2024-04-27 00:50:09.834509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.627 [2024-04-27 00:50:10.115047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.569  Copying: 5120/5120 [kB] (average 238 MBps) 00:30:38.569 00:30:38.569 00:50:11 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:30:38.569 00:30:38.569 real 0m21.468s 00:30:38.569 user 0m16.941s 00:30:38.569 sys 0m2.877s 00:30:38.569 00:50:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:38.569 ************************************ 00:30:38.569 END TEST spdk_dd_bdev_to_bdev 00:30:38.569 ************************************ 00:30:38.569 00:50:11 -- common/autotest_common.sh@10 -- # set +x 00:30:38.569 00:50:11 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:30:38.569 00:50:11 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:30:38.569 00:50:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:38.569 00:50:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:38.569 00:50:11 -- common/autotest_common.sh@10 -- # set +x 00:30:38.569 ************************************ 00:30:38.569 START TEST spdk_dd_sparse 00:30:38.569 ************************************ 00:30:38.569 00:50:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:30:38.569 * Looking for test storage... 
00:30:38.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:38.569 00:50:11 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:38.569 00:50:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.569 00:50:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.569 00:50:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.569 00:50:11 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:38.569 00:50:11 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:38.569 00:50:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:38.569 00:50:11 -- paths/export.sh@5 -- # export PATH 00:30:38.569 00:50:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:38.569 00:50:11 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:30:38.569 00:50:11 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:30:38.569 00:50:11 -- dd/sparse.sh@110 -- # file1=file_zero1 00:30:38.569 00:50:11 -- dd/sparse.sh@111 -- # file2=file_zero2 00:30:38.569 00:50:11 -- dd/sparse.sh@112 -- # file3=file_zero3 00:30:38.569 00:50:11 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:30:38.569 00:50:11 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:30:38.569 00:50:11 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:30:38.569 00:50:11 -- dd/sparse.sh@118 -- # prepare 00:30:38.569 00:50:11 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:30:38.569 00:50:12 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:30:38.569 1+0 records in 00:30:38.569 1+0 records 
out 00:30:38.569 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00834285 s, 503 MB/s 00:30:38.569 00:50:12 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:30:38.569 1+0 records in 00:30:38.569 1+0 records out 00:30:38.569 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00793589 s, 529 MB/s 00:30:38.569 00:50:12 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:30:38.569 1+0 records in 00:30:38.569 1+0 records out 00:30:38.569 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00879723 s, 477 MB/s 00:30:38.569 00:50:12 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:30:38.569 00:50:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:38.569 00:50:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:38.569 00:50:12 -- common/autotest_common.sh@10 -- # set +x 00:30:38.569 ************************************ 00:30:38.569 START TEST dd_sparse_file_to_file 00:30:38.569 ************************************ 00:30:38.569 00:50:12 -- common/autotest_common.sh@1111 -- # file_to_file 00:30:38.569 00:50:12 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:30:38.569 00:50:12 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:30:38.569 00:50:12 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:30:38.569 00:50:12 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:30:38.569 00:50:12 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:30:38.569 00:50:12 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:30:38.569 00:50:12 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:30:38.569 00:50:12 -- dd/sparse.sh@41 -- # gen_conf 00:30:38.569 00:50:12 -- dd/common.sh@31 -- # xtrace_disable 00:30:38.569 00:50:12 -- common/autotest_common.sh@10 -- # set +x 00:30:38.569 { 00:30:38.569 "subsystems": [ 00:30:38.569 { 00:30:38.569 "subsystem": "bdev", 00:30:38.569 "config": [ 00:30:38.569 { 00:30:38.569 "params": { 00:30:38.569 "block_size": 4096, 00:30:38.569 "filename": "dd_sparse_aio_disk", 00:30:38.569 "name": "dd_aio" 00:30:38.569 }, 00:30:38.569 "method": "bdev_aio_create" 00:30:38.569 }, 00:30:38.569 { 00:30:38.569 "params": { 00:30:38.569 "lvs_name": "dd_lvstore", 00:30:38.569 "bdev_name": "dd_aio" 00:30:38.569 }, 00:30:38.569 "method": "bdev_lvol_create_lvstore" 00:30:38.569 }, 00:30:38.569 { 00:30:38.569 "method": "bdev_wait_for_examine" 00:30:38.569 } 00:30:38.569 ] 00:30:38.569 } 00:30:38.569 ] 00:30:38.569 } 00:30:38.569 [2024-04-27 00:50:12.135606] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:38.569 [2024-04-27 00:50:12.135765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144084 ] 00:30:38.828 [2024-04-27 00:50:12.295079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.086 [2024-04-27 00:50:12.512422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.719  Copying: 12/36 [MB] (average 857 MBps) 00:30:40.719 00:30:40.719 00:50:14 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:30:40.719 00:50:14 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:30:40.719 00:50:14 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:30:40.719 00:50:14 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:30:40.719 00:50:14 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:30:40.719 00:50:14 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:30:40.719 00:50:14 -- dd/sparse.sh@52 -- # stat1_b=24576 00:30:40.719 00:50:14 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:30:40.719 00:50:14 -- dd/sparse.sh@53 -- # stat2_b=24576 00:30:40.719 00:50:14 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:30:40.719 00:30:40.719 real 0m2.160s 00:30:40.719 user 0m1.749s 00:30:40.719 sys 0m0.256s 00:30:40.719 00:50:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:40.719 00:50:14 -- common/autotest_common.sh@10 -- # set +x 00:30:40.719 ************************************ 00:30:40.719 END TEST dd_sparse_file_to_file 00:30:40.719 ************************************ 00:30:40.719 00:50:14 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:30:40.719 00:50:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:40.719 00:50:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:40.719 00:50:14 -- common/autotest_common.sh@10 -- # set +x 00:30:40.978 ************************************ 00:30:40.978 START TEST dd_sparse_file_to_bdev 00:30:40.978 ************************************ 00:30:40.978 00:50:14 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:30:40.978 00:50:14 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:30:40.978 00:50:14 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:30:40.978 00:50:14 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:30:40.978 00:50:14 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:30:40.978 00:50:14 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:30:40.978 00:50:14 -- dd/sparse.sh@73 -- # gen_conf 00:30:40.978 00:50:14 -- dd/common.sh@31 -- # xtrace_disable 00:30:40.978 00:50:14 -- common/autotest_common.sh@10 -- # set +x 00:30:40.978 { 00:30:40.978 "subsystems": [ 00:30:40.978 { 00:30:40.978 "subsystem": "bdev", 00:30:40.978 "config": [ 00:30:40.978 { 00:30:40.978 "params": { 00:30:40.978 "block_size": 4096, 00:30:40.978 "filename": "dd_sparse_aio_disk", 00:30:40.978 "name": "dd_aio" 00:30:40.978 }, 00:30:40.978 "method": "bdev_aio_create" 00:30:40.978 }, 00:30:40.978 { 00:30:40.978 "params": { 00:30:40.978 "lvs_name": "dd_lvstore", 00:30:40.978 "lvol_name": "dd_lvol", 00:30:40.978 "size": 37748736, 00:30:40.978 "thin_provision": true 00:30:40.978 }, 
00:30:40.978 "method": "bdev_lvol_create" 00:30:40.978 }, 00:30:40.978 { 00:30:40.978 "method": "bdev_wait_for_examine" 00:30:40.978 } 00:30:40.978 ] 00:30:40.978 } 00:30:40.978 ] 00:30:40.978 } 00:30:40.978 [2024-04-27 00:50:14.389864] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:30:40.978 [2024-04-27 00:50:14.390057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144155 ] 00:30:40.978 [2024-04-27 00:50:14.559048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.236 [2024-04-27 00:50:14.817651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.803 [2024-04-27 00:50:15.142898] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:30:41.803  Copying: 12/36 [MB] (average 545 MBps)[2024-04-27 00:50:15.202471] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:30:43.178 00:30:43.178 00:30:43.178 00:30:43.178 real 0m2.193s 00:30:43.178 user 0m1.823s 00:30:43.178 sys 0m0.269s 00:30:43.178 00:50:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:43.178 00:50:16 -- common/autotest_common.sh@10 -- # set +x 00:30:43.178 ************************************ 00:30:43.178 END TEST dd_sparse_file_to_bdev 00:30:43.178 ************************************ 00:30:43.178 00:50:16 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:30:43.178 00:50:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:43.178 00:50:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:43.178 00:50:16 -- common/autotest_common.sh@10 -- # set +x 00:30:43.178 ************************************ 00:30:43.178 START TEST dd_sparse_bdev_to_file 00:30:43.178 ************************************ 00:30:43.178 00:50:16 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:30:43.178 00:50:16 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:30:43.178 00:50:16 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:30:43.178 00:50:16 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:30:43.178 00:50:16 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:30:43.178 00:50:16 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:30:43.178 00:50:16 -- dd/sparse.sh@91 -- # gen_conf 00:30:43.178 00:50:16 -- dd/common.sh@31 -- # xtrace_disable 00:30:43.178 00:50:16 -- common/autotest_common.sh@10 -- # set +x 00:30:43.178 { 00:30:43.178 "subsystems": [ 00:30:43.178 { 00:30:43.178 "subsystem": "bdev", 00:30:43.178 "config": [ 00:30:43.178 { 00:30:43.178 "params": { 00:30:43.178 "block_size": 4096, 00:30:43.178 "filename": "dd_sparse_aio_disk", 00:30:43.178 "name": "dd_aio" 00:30:43.178 }, 00:30:43.178 "method": "bdev_aio_create" 00:30:43.178 }, 00:30:43.178 { 00:30:43.178 "method": "bdev_wait_for_examine" 00:30:43.178 } 00:30:43.178 ] 00:30:43.178 } 00:30:43.178 ] 00:30:43.178 } 00:30:43.178 [2024-04-27 00:50:16.671390] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:43.178 [2024-04-27 00:50:16.671596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144225 ] 00:30:43.436 [2024-04-27 00:50:16.842938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.694 [2024-04-27 00:50:17.112534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.326  Copying: 12/36 [MB] (average 1000 MBps) 00:30:45.326 00:30:45.326 00:50:18 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:30:45.326 00:50:18 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:30:45.326 00:50:18 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:30:45.326 00:50:18 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:30:45.326 00:50:18 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:30:45.326 00:50:18 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:30:45.326 00:50:18 -- dd/sparse.sh@102 -- # stat2_b=24576 00:30:45.326 00:50:18 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:30:45.326 00:50:18 -- dd/sparse.sh@103 -- # stat3_b=24576 00:30:45.326 00:50:18 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:30:45.326 00:30:45.326 real 0m2.166s 00:30:45.326 user 0m1.769s 00:30:45.326 sys 0m0.299s 00:30:45.326 00:50:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:45.326 00:50:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.326 ************************************ 00:30:45.326 END TEST dd_sparse_bdev_to_file 00:30:45.326 ************************************ 00:30:45.326 00:50:18 -- dd/sparse.sh@1 -- # cleanup 00:30:45.326 00:50:18 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:30:45.326 00:50:18 -- dd/sparse.sh@12 -- # rm file_zero1 00:30:45.326 00:50:18 -- dd/sparse.sh@13 -- # rm file_zero2 00:30:45.326 00:50:18 -- dd/sparse.sh@14 -- # rm file_zero3 00:30:45.326 00:30:45.326 real 0m6.921s 00:30:45.326 user 0m5.518s 00:30:45.326 sys 0m1.042s 00:30:45.326 00:50:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:45.326 00:50:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.326 ************************************ 00:30:45.326 END TEST spdk_dd_sparse 00:30:45.326 ************************************ 00:30:45.326 00:50:18 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:30:45.326 00:50:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.326 00:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.326 00:50:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.586 ************************************ 00:30:45.586 START TEST spdk_dd_negative 00:30:45.586 ************************************ 00:30:45.586 00:50:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:30:45.586 * Looking for test storage... 
00:30:45.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:30:45.586 00:50:18 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:45.586 00:50:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.586 00:50:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.586 00:50:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.586 00:50:18 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:45.586 00:50:18 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:45.586 00:50:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:45.586 00:50:18 -- paths/export.sh@5 -- # export PATH 00:30:45.586 00:50:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:30:45.586 00:50:19 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:45.586 00:50:19 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:45.586 00:50:19 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:45.586 00:50:19 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:30:45.586 00:50:19 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:30:45.586 00:50:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.586 00:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.586 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:45.586 ************************************ 00:30:45.586 
START TEST dd_invalid_arguments 00:30:45.586 ************************************ 00:30:45.587 00:50:19 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:30:45.587 00:50:19 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:30:45.587 00:50:19 -- common/autotest_common.sh@638 -- # local es=0 00:30:45.587 00:50:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:30:45.587 00:50:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.587 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:45.587 00:50:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.587 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:45.587 00:50:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.587 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:45.587 00:50:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.587 00:50:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:45.587 00:50:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:30:45.587 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:30:45.587 00:30:45.587 CPU options: 00:30:45.587 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:30:45.587 (like [0,1,10]) 00:30:45.587 --lcores lcore to CPU mapping list. The list is in the format: 00:30:45.587 [<,lcores[@CPUs]>...] 00:30:45.587 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:30:45.587 Within the group, '-' is used for range separator, 00:30:45.587 ',' is used for single number separator. 00:30:45.587 '( )' can be omitted for single element group, 00:30:45.587 '@' can be omitted if cpus and lcores have the same value 00:30:45.587 --disable-cpumask-locks Disable CPU core lock files. 00:30:45.587 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:30:45.587 pollers in the app support interrupt mode) 00:30:45.587 -p, --main-core main (primary) core for DPDK 00:30:45.587 00:30:45.587 Configuration options: 00:30:45.587 -c, --config, --json JSON config file 00:30:45.587 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:30:45.587 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:30:45.587 --wait-for-rpc wait for RPCs to initialize subsystems 00:30:45.587 --rpcs-allowed comma-separated list of permitted RPCS 00:30:45.587 --json-ignore-init-errors don't exit on invalid config entry 00:30:45.587 00:30:45.587 Memory options: 00:30:45.587 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:30:45.587 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:30:45.587 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:30:45.587 -R, --huge-unlink unlink huge files after initialization 00:30:45.587 -n, --mem-channels number of memory channels used for DPDK 00:30:45.587 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:30:45.587 --msg-mempool-size global message memory pool size in count (default: 262143) 00:30:45.587 --no-huge run without using hugepages 00:30:45.587 -i, --shm-id shared memory ID (optional) 00:30:45.587 -g, --single-file-segments force creating just one hugetlbfs file 00:30:45.587 00:30:45.587 PCI options: 00:30:45.587 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:30:45.587 -B, --pci-blocked pci addr to block (can be used more than once) 00:30:45.587 -u, --no-pci disable PCI access 00:30:45.587 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:30:45.587 00:30:45.587 Log options: 00:30:45.587 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:30:45.587 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:30:45.587 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:30:45.587 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:30:45.587 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:30:45.587 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:30:45.587 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:30:45.587 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:30:45.587 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:30:45.587 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:30:45.587 virtio_vfio_user, vmd) 00:30:45.587 --silence-noticelog disable notice level logging to stderr 00:30:45.587 00:30:45.587 Trace options: 00:30:45.587 --num-trace-entries number of trace entries for each core, must be power of 2, 00:30:45.587 setting 0 to disable trace (default 32768) 00:30:45.587 Tracepoints vary in size and can use more than one trace entry. 00:30:45.587 -e, --tpoint-group [:] 00:30:45.587 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:30:45.587 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:30:45.587 [2024-04-27 00:50:19.120338] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:30:45.587 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:30:45.587 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:30:45.587 a tracepoint group. First tpoint inside a group can be enabled by 00:30:45.587 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:30:45.587 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:30:45.587 in /include/spdk_internal/trace_defs.h 00:30:45.587 00:30:45.587 Other options: 00:30:45.587 -h, --help show this usage 00:30:45.587 -v, --version print SPDK version 00:30:45.587 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:30:45.587 --env-context Opaque context for use of the env implementation 00:30:45.587 00:30:45.587 Application specific: 00:30:45.587 [--------- DD Options ---------] 00:30:45.587 --if Input file. Must specify either --if or --ib. 00:30:45.587 --ib Input bdev. Must specifier either --if or --ib 00:30:45.587 --of Output file. Must specify either --of or --ob. 00:30:45.587 --ob Output bdev. Must specify either --of or --ob. 00:30:45.587 --iflag Input file flags. 00:30:45.587 --oflag Output file flags. 00:30:45.587 --bs I/O unit size (default: 4096) 00:30:45.587 --qd Queue depth (default: 2) 00:30:45.587 --count I/O unit count. The number of I/O units to copy. (default: all) 00:30:45.587 --skip Skip this many I/O units at start of input. (default: 0) 00:30:45.587 --seek Skip this many I/O units at start of output. (default: 0) 00:30:45.587 --aio Force usage of AIO. (by default io_uring is used if available) 00:30:45.587 --sparse Enable hole skipping in input target 00:30:45.587 Available iflag and oflag values: 00:30:45.587 append - append mode 00:30:45.587 direct - use direct I/O for data 00:30:45.587 directory - fail unless a directory 00:30:45.587 dsync - use synchronized I/O for data 00:30:45.587 noatime - do not update access time 00:30:45.587 noctty - do not assign controlling terminal from file 00:30:45.587 nofollow - do not follow symlinks 00:30:45.587 nonblock - use non-blocking I/O 00:30:45.587 sync - use synchronized I/O for data and metadata 00:30:45.587 00:50:19 -- common/autotest_common.sh@641 -- # es=2 00:30:45.587 00:50:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:45.587 00:50:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:45.587 00:50:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:45.587 00:30:45.587 real 0m0.114s 00:30:45.587 user 0m0.063s 00:30:45.587 sys 0m0.052s 00:30:45.587 00:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:45.587 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:45.587 ************************************ 00:30:45.587 END TEST dd_invalid_arguments 00:30:45.587 ************************************ 00:30:45.846 00:50:19 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:30:45.846 00:50:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.846 00:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.846 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:45.846 ************************************ 00:30:45.846 START TEST dd_double_input 00:30:45.846 ************************************ 00:30:45.846 00:50:19 -- common/autotest_common.sh@1111 -- # double_input 00:30:45.846 00:50:19 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:30:45.846 00:50:19 -- common/autotest_common.sh@638 -- # local es=0 00:30:45.846 00:50:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:30:45.846 00:50:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.846 00:50:19 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:30:45.846 00:50:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.846 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:45.846 00:50:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.846 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:45.846 00:50:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.847 00:50:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:45.847 00:50:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:30:45.847 [2024-04-27 00:50:19.315866] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:30:45.847 00:50:19 -- common/autotest_common.sh@641 -- # es=22 00:30:45.847 00:50:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:45.847 00:50:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:45.847 00:50:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:45.847 00:30:45.847 real 0m0.112s 00:30:45.847 user 0m0.043s 00:30:45.847 sys 0m0.070s 00:30:45.847 00:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:45.847 ************************************ 00:30:45.847 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:45.847 END TEST dd_double_input 00:30:45.847 ************************************ 00:30:45.847 00:50:19 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:30:45.847 00:50:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.847 00:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.847 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:46.105 ************************************ 00:30:46.105 START TEST dd_double_output 00:30:46.105 ************************************ 00:30:46.105 00:50:19 -- common/autotest_common.sh@1111 -- # double_output 00:30:46.105 00:50:19 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:30:46.105 00:50:19 -- common/autotest_common.sh@638 -- # local es=0 00:30:46.105 00:50:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:30:46.105 00:50:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.105 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.105 00:50:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.105 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.105 00:50:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.105 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.106 00:50:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.106 00:50:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:46.106 00:50:19 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:30:46.106 [2024-04-27 00:50:19.514460] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:30:46.106 00:50:19 -- common/autotest_common.sh@641 -- # es=22 00:30:46.106 00:50:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:46.106 00:50:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:46.106 00:50:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:46.106 00:30:46.106 real 0m0.117s 00:30:46.106 user 0m0.053s 00:30:46.106 sys 0m0.065s 00:30:46.106 00:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:46.106 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:46.106 ************************************ 00:30:46.106 END TEST dd_double_output 00:30:46.106 ************************************ 00:30:46.106 00:50:19 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:30:46.106 00:50:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:46.106 00:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:46.106 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:46.106 ************************************ 00:30:46.106 START TEST dd_no_input 00:30:46.106 ************************************ 00:30:46.106 00:50:19 -- common/autotest_common.sh@1111 -- # no_input 00:30:46.106 00:50:19 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:30:46.106 00:50:19 -- common/autotest_common.sh@638 -- # local es=0 00:30:46.106 00:50:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:30:46.106 00:50:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.106 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.106 00:50:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.106 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.106 00:50:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.106 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.106 00:50:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.106 00:50:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:46.106 00:50:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:30:46.364 [2024-04-27 00:50:19.717530] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:30:46.364 00:50:19 -- common/autotest_common.sh@641 -- # es=22 00:30:46.364 00:50:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:46.364 00:50:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:46.364 00:50:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:46.364 00:30:46.364 real 0m0.115s 00:30:46.364 user 0m0.055s 00:30:46.364 sys 0m0.060s 00:30:46.364 00:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:46.364 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:46.364 ************************************ 00:30:46.364 END TEST dd_no_input 00:30:46.365 ************************************ 00:30:46.365 00:50:19 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:30:46.365 00:50:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:46.365 00:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:46.365 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:46.365 ************************************ 00:30:46.365 START TEST dd_no_output 00:30:46.365 ************************************ 00:30:46.365 00:50:19 -- common/autotest_common.sh@1111 -- # no_output 00:30:46.365 00:50:19 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:46.365 00:50:19 -- common/autotest_common.sh@638 -- # local es=0 00:30:46.365 00:50:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:46.365 00:50:19 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.365 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.365 00:50:19 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.365 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.365 00:50:19 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.365 00:50:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.365 00:50:19 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.365 00:50:19 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:46.365 00:50:19 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:30:46.365 [2024-04-27 00:50:19.922832] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:30:46.623 00:50:19 -- common/autotest_common.sh@641 -- # es=22 00:30:46.623 00:50:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:46.623 00:50:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:46.623 00:50:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:46.623 00:30:46.623 real 0m0.117s 00:30:46.623 user 0m0.058s 00:30:46.623 sys 0m0.060s 00:30:46.623 00:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:46.623 00:50:19 -- common/autotest_common.sh@10 -- # set +x 00:30:46.623 ************************************ 00:30:46.623 END TEST dd_no_output 00:30:46.623 ************************************ 00:30:46.623 00:50:20 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:30:46.623 00:50:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:46.623 00:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:46.623 00:50:20 -- common/autotest_common.sh@10 -- # set +x 00:30:46.623 ************************************ 00:30:46.623 START TEST dd_wrong_blocksize 00:30:46.623 ************************************ 00:30:46.623 00:50:20 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:30:46.623 00:50:20 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:30:46.623 00:50:20 -- common/autotest_common.sh@638 -- # local es=0 00:30:46.623 00:50:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:30:46.623 00:50:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.623 00:50:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.623 00:50:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.623 00:50:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.623 00:50:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.623 00:50:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.623 00:50:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.623 00:50:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:46.623 00:50:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:30:46.623 [2024-04-27 00:50:20.121503] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:30:46.623 00:50:20 -- common/autotest_common.sh@641 -- # es=22 00:30:46.623 00:50:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:46.623 00:50:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:46.623 00:50:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:46.623 00:30:46.623 real 0m0.114s 00:30:46.623 user 0m0.057s 00:30:46.623 sys 0m0.056s 00:30:46.623 00:50:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:46.623 00:50:20 -- common/autotest_common.sh@10 -- # set +x 00:30:46.623 ************************************ 00:30:46.623 END TEST dd_wrong_blocksize 00:30:46.623 ************************************ 00:30:46.623 00:50:20 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:30:46.623 00:50:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:46.623 00:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:46.623 00:50:20 -- common/autotest_common.sh@10 -- # set +x 00:30:46.881 ************************************ 00:30:46.881 START TEST dd_smaller_blocksize 00:30:46.881 ************************************ 00:30:46.881 00:50:20 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:30:46.881 00:50:20 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:30:46.881 00:50:20 -- common/autotest_common.sh@638 -- # local es=0 00:30:46.882 00:50:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:30:46.882 00:50:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.882 00:50:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.882 00:50:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.882 00:50:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.882 00:50:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.882 00:50:20 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:46.882 00:50:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:46.882 00:50:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:46.882 00:50:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:30:46.882 [2024-04-27 00:50:20.322129] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:30:46.882 [2024-04-27 00:50:20.322577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144526 ] 00:30:47.140 [2024-04-27 00:50:20.493681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.399 [2024-04-27 00:50:20.747105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.967 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:30:47.967 [2024-04-27 00:50:21.393773] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:30:47.967 [2024-04-27 00:50:21.394186] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:48.902 [2024-04-27 00:50:22.122286] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:30:49.161 ************************************ 00:30:49.161 END TEST dd_smaller_blocksize 00:30:49.161 ************************************ 00:30:49.161 00:50:22 -- common/autotest_common.sh@641 -- # es=244 00:30:49.161 00:50:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:49.161 00:50:22 -- common/autotest_common.sh@650 -- # es=116 00:30:49.161 00:50:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:30:49.161 00:50:22 -- common/autotest_common.sh@658 -- # es=1 00:30:49.161 00:50:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:49.161 00:30:49.161 real 0m2.283s 00:30:49.161 user 0m1.680s 00:30:49.161 sys 0m0.497s 00:30:49.161 00:50:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:49.161 00:50:22 -- common/autotest_common.sh@10 -- # set +x 00:30:49.161 00:50:22 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:30:49.161 00:50:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:49.161 00:50:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:49.161 00:50:22 -- common/autotest_common.sh@10 -- # set +x 00:30:49.161 ************************************ 00:30:49.161 START TEST dd_invalid_count 00:30:49.161 ************************************ 00:30:49.161 00:50:22 -- common/autotest_common.sh@1111 -- # invalid_count 00:30:49.161 00:50:22 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:30:49.161 00:50:22 -- common/autotest_common.sh@638 -- # local es=0 00:30:49.161 00:50:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:30:49.161 00:50:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.161 00:50:22 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.161 00:50:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.161 00:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.161 00:50:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.161 00:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.161 00:50:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.161 00:50:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:49.161 00:50:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:30:49.161 [2024-04-27 00:50:22.688334] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:30:49.162 00:50:22 -- common/autotest_common.sh@641 -- # es=22 00:30:49.162 00:50:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:49.162 00:50:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:49.162 00:50:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:49.162 00:30:49.162 real 0m0.121s 00:30:49.162 user 0m0.057s 00:30:49.162 sys 0m0.062s 00:30:49.162 00:50:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:49.162 00:50:22 -- common/autotest_common.sh@10 -- # set +x 00:30:49.162 ************************************ 00:30:49.162 END TEST dd_invalid_count 00:30:49.162 ************************************ 00:30:49.420 00:50:22 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:30:49.420 00:50:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:49.420 00:50:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:49.420 00:50:22 -- common/autotest_common.sh@10 -- # set +x 00:30:49.420 ************************************ 00:30:49.420 START TEST dd_invalid_oflag 00:30:49.420 ************************************ 00:30:49.420 00:50:22 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:30:49.420 00:50:22 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:30:49.420 00:50:22 -- common/autotest_common.sh@638 -- # local es=0 00:30:49.420 00:50:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:30:49.420 00:50:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.420 00:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.420 00:50:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.420 00:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.420 00:50:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.420 00:50:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.420 00:50:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.420 00:50:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:49.420 00:50:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:30:49.420 [2024-04-27 00:50:22.900429] 
spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:30:49.420 00:50:22 -- common/autotest_common.sh@641 -- # es=22 00:30:49.420 00:50:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:49.420 00:50:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:49.420 00:50:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:49.420 00:30:49.420 real 0m0.120s 00:30:49.420 user 0m0.043s 00:30:49.420 sys 0m0.075s 00:30:49.420 00:50:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:49.420 00:50:22 -- common/autotest_common.sh@10 -- # set +x 00:30:49.421 ************************************ 00:30:49.421 END TEST dd_invalid_oflag 00:30:49.421 ************************************ 00:30:49.421 00:50:22 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:30:49.421 00:50:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:49.421 00:50:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:49.421 00:50:22 -- common/autotest_common.sh@10 -- # set +x 00:30:49.680 ************************************ 00:30:49.680 START TEST dd_invalid_iflag 00:30:49.680 ************************************ 00:30:49.680 00:50:23 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:30:49.680 00:50:23 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:30:49.680 00:50:23 -- common/autotest_common.sh@638 -- # local es=0 00:30:49.680 00:50:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:30:49.680 00:50:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.680 00:50:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.680 00:50:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.680 00:50:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.680 00:50:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.680 00:50:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.680 00:50:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.680 00:50:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:49.680 00:50:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:30:49.680 [2024-04-27 00:50:23.101612] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:30:49.680 00:50:23 -- common/autotest_common.sh@641 -- # es=22 00:30:49.680 00:50:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:49.680 00:50:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:30:49.680 00:50:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:49.680 00:30:49.680 real 0m0.114s 00:30:49.680 user 0m0.052s 00:30:49.680 sys 0m0.061s 00:30:49.680 00:50:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:49.680 00:50:23 -- common/autotest_common.sh@10 -- # set +x 00:30:49.680 ************************************ 00:30:49.680 END TEST dd_invalid_iflag 00:30:49.680 ************************************ 00:30:49.680 00:50:23 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:30:49.680 00:50:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:49.680 00:50:23 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:30:49.680 00:50:23 -- common/autotest_common.sh@10 -- # set +x 00:30:49.680 ************************************ 00:30:49.680 START TEST dd_unknown_flag 00:30:49.680 ************************************ 00:30:49.680 00:50:23 -- common/autotest_common.sh@1111 -- # unknown_flag 00:30:49.680 00:50:23 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:30:49.680 00:50:23 -- common/autotest_common.sh@638 -- # local es=0 00:30:49.680 00:50:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:30:49.680 00:50:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.680 00:50:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.680 00:50:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.680 00:50:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.680 00:50:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.680 00:50:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:49.680 00:50:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:49.680 00:50:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:49.680 00:50:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:30:49.939 [2024-04-27 00:50:23.300941] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:49.939 [2024-04-27 00:50:23.301641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144674 ] 00:30:49.939 [2024-04-27 00:50:23.464141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.198 [2024-04-27 00:50:23.688817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.456 [2024-04-27 00:50:24.000172] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:30:50.456 [2024-04-27 00:50:24.000277] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:50.456  Copying: 0/0 [B] (average 0 Bps)[2024-04-27 00:50:24.000478] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:30:51.390 [2024-04-27 00:50:24.715398] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:30:51.649 00:30:51.649 00:30:51.649 ************************************ 00:30:51.649 END TEST dd_unknown_flag 00:30:51.649 ************************************ 00:30:51.649 00:50:25 -- common/autotest_common.sh@641 -- # es=234 00:30:51.649 00:50:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:51.649 00:50:25 -- common/autotest_common.sh@650 -- # es=106 00:30:51.649 00:50:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:30:51.649 00:50:25 -- common/autotest_common.sh@658 -- # es=1 00:30:51.649 00:50:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:51.649 00:30:51.649 real 0m1.940s 00:30:51.649 user 0m1.527s 00:30:51.649 sys 0m0.280s 00:30:51.649 00:50:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:51.649 00:50:25 -- common/autotest_common.sh@10 -- # set +x 00:30:51.649 00:50:25 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:30:51.649 00:50:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:51.649 00:50:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:51.649 00:50:25 -- common/autotest_common.sh@10 -- # set +x 00:30:51.908 ************************************ 00:30:51.908 START TEST dd_invalid_json 00:30:51.908 ************************************ 00:30:51.908 00:50:25 -- common/autotest_common.sh@1111 -- # invalid_json 00:30:51.908 00:50:25 -- dd/negative_dd.sh@95 -- # : 00:30:51.908 00:50:25 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:51.908 00:50:25 -- common/autotest_common.sh@638 -- # local es=0 00:30:51.908 00:50:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:51.908 00:50:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.908 00:50:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:51.908 00:50:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.908 00:50:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:51.908 00:50:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.908 00:50:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:30:51.909 00:50:25 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.909 00:50:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:30:51.909 00:50:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:30:51.909 [2024-04-27 00:50:25.325271] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:30:51.909 [2024-04-27 00:50:25.325469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144731 ] 00:30:51.909 [2024-04-27 00:50:25.494193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.166 [2024-04-27 00:50:25.709262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.166 [2024-04-27 00:50:25.709374] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:30:52.166 [2024-04-27 00:50:25.709415] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:52.166 [2024-04-27 00:50:25.709443] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:52.166 [2024-04-27 00:50:25.709550] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:30:52.733 ************************************ 00:30:52.733 END TEST dd_invalid_json 00:30:52.733 ************************************ 00:30:52.733 00:50:26 -- common/autotest_common.sh@641 -- # es=234 00:30:52.733 00:50:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:30:52.733 00:50:26 -- common/autotest_common.sh@650 -- # es=106 00:30:52.733 00:50:26 -- common/autotest_common.sh@651 -- # case "$es" in 00:30:52.733 00:50:26 -- common/autotest_common.sh@658 -- # es=1 00:30:52.733 00:50:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:30:52.733 00:30:52.733 real 0m0.861s 00:30:52.733 user 0m0.649s 00:30:52.733 sys 0m0.112s 00:30:52.733 00:50:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:52.733 00:50:26 -- common/autotest_common.sh@10 -- # set +x 00:30:52.733 00:30:52.733 real 0m7.231s 00:30:52.733 user 0m4.921s 00:30:52.733 sys 0m1.938s 00:30:52.733 00:50:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:52.733 00:50:26 -- common/autotest_common.sh@10 -- # set +x 00:30:52.733 ************************************ 00:30:52.733 END TEST spdk_dd_negative 00:30:52.733 ************************************ 00:30:52.733 00:30:52.733 real 2m42.062s 00:30:52.733 user 2m8.760s 00:30:52.733 sys 0m23.080s 00:30:52.733 00:50:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:52.733 00:50:26 -- common/autotest_common.sh@10 -- # set +x 00:30:52.733 ************************************ 00:30:52.733 END TEST spdk_dd 00:30:52.733 ************************************ 00:30:52.733 00:50:26 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:30:52.733 00:50:26 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:30:52.733 00:50:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:52.733 00:50:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:52.733 00:50:26 -- common/autotest_common.sh@10 -- # set +x 00:30:52.733 ************************************ 00:30:52.733 START TEST blockdev_nvme 00:30:52.733 ************************************ 
00:30:52.733 00:50:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:30:52.992 * Looking for test storage... 00:30:52.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:30:52.992 00:50:26 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:30:52.992 00:50:26 -- bdev/nbd_common.sh@6 -- # set -e 00:30:52.992 00:50:26 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:52.992 00:50:26 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:52.992 00:50:26 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:30:52.992 00:50:26 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:30:52.992 00:50:26 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:30:52.992 00:50:26 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:30:52.992 00:50:26 -- bdev/blockdev.sh@20 -- # : 00:30:52.992 00:50:26 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:30:52.993 00:50:26 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:30:52.993 00:50:26 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:30:52.993 00:50:26 -- bdev/blockdev.sh@674 -- # uname -s 00:30:52.993 00:50:26 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:30:52.993 00:50:26 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:30:52.993 00:50:26 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:30:52.993 00:50:26 -- bdev/blockdev.sh@683 -- # crypto_device= 00:30:52.993 00:50:26 -- bdev/blockdev.sh@684 -- # dek= 00:30:52.993 00:50:26 -- bdev/blockdev.sh@685 -- # env_ctx= 00:30:52.993 00:50:26 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:30:52.993 00:50:26 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:30:52.993 00:50:26 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:30:52.993 00:50:26 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:30:52.993 00:50:26 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:30:52.993 00:50:26 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=144833 00:30:52.993 00:50:26 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:52.993 00:50:26 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:52.993 00:50:26 -- bdev/blockdev.sh@49 -- # waitforlisten 144833 00:30:52.993 00:50:26 -- common/autotest_common.sh@817 -- # '[' -z 144833 ']' 00:30:52.993 00:50:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.993 00:50:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:52.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.993 00:50:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.993 00:50:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:52.993 00:50:26 -- common/autotest_common.sh@10 -- # set +x 00:30:52.993 [2024-04-27 00:50:26.434983] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:52.993 [2024-04-27 00:50:26.435250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144833 ] 00:30:53.251 [2024-04-27 00:50:26.590596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.251 [2024-04-27 00:50:26.812189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.186 00:50:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:54.186 00:50:27 -- common/autotest_common.sh@850 -- # return 0 00:30:54.186 00:50:27 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:30:54.186 00:50:27 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:30:54.186 00:50:27 -- bdev/blockdev.sh@81 -- # local json 00:30:54.186 00:50:27 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:30:54.186 00:50:27 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:54.186 00:50:27 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:30:54.186 00:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.186 00:50:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.186 00:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.186 00:50:27 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:30:54.186 00:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.186 00:50:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.186 00:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.186 00:50:27 -- bdev/blockdev.sh@740 -- # cat 00:30:54.186 00:50:27 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:30:54.186 00:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.186 00:50:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.186 00:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.186 00:50:27 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:30:54.186 00:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.186 00:50:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.186 00:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.186 00:50:27 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:54.186 00:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.186 00:50:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.186 00:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.186 00:50:27 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:30:54.186 00:50:27 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:30:54.186 00:50:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:54.186 00:50:27 -- common/autotest_common.sh@10 -- # set +x 00:30:54.186 00:50:27 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:30:54.186 00:50:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:54.445 00:50:27 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:30:54.445 00:50:27 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0c828358-fa92-4b87-a328-374e2f2c4040"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0c828358-fa92-4b87-a328-374e2f2c4040",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:30:54.445 00:50:27 -- bdev/blockdev.sh@749 -- # jq -r .name 00:30:54.445 00:50:27 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:30:54.445 00:50:27 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:30:54.445 00:50:27 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:30:54.445 00:50:27 -- bdev/blockdev.sh@754 -- # killprocess 144833 00:30:54.445 00:50:27 -- common/autotest_common.sh@936 -- # '[' -z 144833 ']' 00:30:54.445 00:50:27 -- common/autotest_common.sh@940 -- # kill -0 144833 00:30:54.445 00:50:27 -- common/autotest_common.sh@941 -- # uname 00:30:54.445 00:50:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:54.445 00:50:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144833 00:30:54.445 00:50:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:54.445 00:50:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:54.445 killing process with pid 144833 00:30:54.445 00:50:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144833' 00:30:54.445 00:50:27 -- common/autotest_common.sh@955 -- # kill 144833 00:30:54.445 00:50:27 -- common/autotest_common.sh@960 -- # wait 144833 00:30:56.980 00:50:29 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:56.980 00:50:29 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:56.980 00:50:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:30:56.980 00:50:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:56.980 00:50:29 -- common/autotest_common.sh@10 -- # set +x 00:30:56.980 ************************************ 00:30:56.980 START TEST bdev_hello_world 00:30:56.980 ************************************ 00:30:56.980 00:50:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:30:56.980 [2024-04-27 00:50:30.066056] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:30:56.980 [2024-04-27 00:50:30.066263] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144922 ] 00:30:56.980 [2024-04-27 00:50:30.236661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.980 [2024-04-27 00:50:30.490154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.545 [2024-04-27 00:50:30.921898] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:57.545 [2024-04-27 00:50:30.921989] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:30:57.545 [2024-04-27 00:50:30.922029] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:57.545 [2024-04-27 00:50:30.925058] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:57.545 [2024-04-27 00:50:30.925638] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:57.545 [2024-04-27 00:50:30.925702] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:57.545 [2024-04-27 00:50:30.925974] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:30:57.545 00:30:57.545 [2024-04-27 00:50:30.926022] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:58.477 00:30:58.477 real 0m2.050s 00:30:58.477 user 0m1.711s 00:30:58.477 sys 0m0.239s 00:30:58.477 00:50:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:58.477 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:30:58.477 ************************************ 00:30:58.477 END TEST bdev_hello_world 00:30:58.477 ************************************ 00:30:58.736 00:50:32 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:30:58.736 00:50:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:58.736 00:50:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:58.736 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:30:58.736 ************************************ 00:30:58.736 START TEST bdev_bounds 00:30:58.736 ************************************ 00:30:58.736 00:50:32 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:30:58.736 00:50:32 -- bdev/blockdev.sh@290 -- # bdevio_pid=144976 00:30:58.736 00:50:32 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:58.736 00:50:32 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:30:58.736 Process bdevio pid: 144976 00:30:58.736 00:50:32 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 144976' 00:30:58.736 00:50:32 -- bdev/blockdev.sh@293 -- # waitforlisten 144976 00:30:58.736 00:50:32 -- common/autotest_common.sh@817 -- # '[' -z 144976 ']' 00:30:58.736 00:50:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.736 00:50:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:58.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.736 00:50:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:58.736 00:50:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:58.736 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:30:58.736 [2024-04-27 00:50:32.191317] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:30:58.736 [2024-04-27 00:50:32.191526] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144976 ] 00:30:58.994 [2024-04-27 00:50:32.374719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:59.253 [2024-04-27 00:50:32.633178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.253 [2024-04-27 00:50:32.633297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.253 [2024-04-27 00:50:32.633290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.817 00:50:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:59.817 00:50:33 -- common/autotest_common.sh@850 -- # return 0 00:30:59.817 00:50:33 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:59.817 I/O targets: 00:30:59.817 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:30:59.817 00:30:59.817 00:30:59.817 CUnit - A unit testing framework for C - Version 2.1-3 00:30:59.817 http://cunit.sourceforge.net/ 00:30:59.817 00:30:59.817 00:30:59.817 Suite: bdevio tests on: Nvme0n1 00:30:59.817 Test: blockdev write read block ...passed 00:30:59.817 Test: blockdev write zeroes read block ...passed 00:30:59.817 Test: blockdev write zeroes read no split ...passed 00:30:59.817 Test: blockdev write zeroes read split ...passed 00:30:59.817 Test: blockdev write zeroes read split partial ...passed 00:30:59.817 Test: blockdev reset ...[2024-04-27 00:50:33.337203] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:30:59.817 [2024-04-27 00:50:33.340739] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:59.817 passed 00:30:59.817 Test: blockdev write read 8 blocks ...passed 00:30:59.817 Test: blockdev write read size > 128k ...passed 00:30:59.817 Test: blockdev write read invalid size ...passed 00:30:59.817 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:59.817 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:59.817 Test: blockdev write read max offset ...passed 00:30:59.817 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:59.817 Test: blockdev writev readv 8 blocks ...passed 00:30:59.817 Test: blockdev writev readv 30 x 1block ...passed 00:30:59.817 Test: blockdev writev readv block ...passed 00:30:59.817 Test: blockdev writev readv size > 128k ...passed 00:30:59.817 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:59.817 Test: blockdev comparev and writev ...[2024-04-27 00:50:33.348097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1100d000 len:0x1000 00:30:59.817 [2024-04-27 00:50:33.348187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:30:59.817 passed 00:30:59.817 Test: blockdev nvme passthru rw ...passed 00:30:59.817 Test: blockdev nvme passthru vendor specific ...[2024-04-27 00:50:33.348997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:30:59.817 [2024-04-27 00:50:33.349064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:30:59.817 passed 00:30:59.818 Test: blockdev nvme admin passthru ...passed 00:30:59.818 Test: blockdev copy ...passed 00:30:59.818 00:30:59.818 Run Summary: Type Total Ran Passed Failed Inactive 00:30:59.818 suites 1 1 n/a 0 0 00:30:59.818 tests 23 23 23 0 0 00:30:59.818 asserts 152 152 152 0 n/a 00:30:59.818 00:30:59.818 Elapsed time = 0.209 seconds 00:30:59.818 0 00:30:59.818 00:50:33 -- bdev/blockdev.sh@295 -- # killprocess 144976 00:30:59.818 00:50:33 -- common/autotest_common.sh@936 -- # '[' -z 144976 ']' 00:30:59.818 00:50:33 -- common/autotest_common.sh@940 -- # kill -0 144976 00:30:59.818 00:50:33 -- common/autotest_common.sh@941 -- # uname 00:30:59.818 00:50:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:59.818 00:50:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 144976 00:30:59.818 00:50:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:59.818 00:50:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:59.818 killing process with pid 144976 00:30:59.818 00:50:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 144976' 00:30:59.818 00:50:33 -- common/autotest_common.sh@955 -- # kill 144976 00:30:59.818 00:50:33 -- common/autotest_common.sh@960 -- # wait 144976 00:31:01.191 00:50:34 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:31:01.191 00:31:01.191 real 0m2.418s 00:31:01.191 user 0m5.617s 00:31:01.191 sys 0m0.342s 00:31:01.192 ************************************ 00:31:01.192 END TEST bdev_bounds 00:31:01.192 ************************************ 00:31:01.192 00:50:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:01.192 00:50:34 -- common/autotest_common.sh@10 -- # set +x 00:31:01.192 00:50:34 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
00:31:01.192 00:50:34 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:31:01.192 00:50:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:01.192 00:50:34 -- common/autotest_common.sh@10 -- # set +x 00:31:01.192 ************************************ 00:31:01.192 START TEST bdev_nbd 00:31:01.192 ************************************ 00:31:01.192 00:50:34 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:31:01.192 00:50:34 -- bdev/blockdev.sh@300 -- # uname -s 00:31:01.192 00:50:34 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:31:01.192 00:50:34 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:01.192 00:50:34 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:01.192 00:50:34 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1') 00:31:01.192 00:50:34 -- bdev/blockdev.sh@304 -- # local bdev_all 00:31:01.192 00:50:34 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:31:01.192 00:50:34 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:31:01.192 00:50:34 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:01.192 00:50:34 -- bdev/blockdev.sh@311 -- # local nbd_all 00:31:01.192 00:50:34 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:31:01.192 00:50:34 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:31:01.192 00:50:34 -- bdev/blockdev.sh@314 -- # local nbd_list 00:31:01.192 00:50:34 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1') 00:31:01.192 00:50:34 -- bdev/blockdev.sh@315 -- # local bdev_list 00:31:01.192 00:50:34 -- bdev/blockdev.sh@318 -- # nbd_pid=145045 00:31:01.192 00:50:34 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:01.192 00:50:34 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:01.192 00:50:34 -- bdev/blockdev.sh@320 -- # waitforlisten 145045 /var/tmp/spdk-nbd.sock 00:31:01.192 00:50:34 -- common/autotest_common.sh@817 -- # '[' -z 145045 ']' 00:31:01.192 00:50:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:01.192 00:50:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:01.192 00:50:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:01.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:01.192 00:50:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:01.192 00:50:34 -- common/autotest_common.sh@10 -- # set +x 00:31:01.192 [2024-04-27 00:50:34.689454] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:31:01.192 [2024-04-27 00:50:34.689626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:01.450 [2024-04-27 00:50:34.849036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.708 [2024-04-27 00:50:35.064768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.275 00:50:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:02.275 00:50:35 -- common/autotest_common.sh@850 -- # return 0 00:31:02.275 00:50:35 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@24 -- # local i 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:02.275 00:50:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:31:02.534 00:50:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:02.534 00:50:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:02.534 00:50:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:02.534 00:50:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:02.534 00:50:36 -- common/autotest_common.sh@855 -- # local i 00:31:02.534 00:50:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:02.534 00:50:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:02.534 00:50:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:02.534 00:50:36 -- common/autotest_common.sh@859 -- # break 00:31:02.534 00:50:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:02.534 00:50:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:02.534 00:50:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:02.534 1+0 records in 00:31:02.534 1+0 records out 00:31:02.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679481 s, 6.0 MB/s 00:31:02.534 00:50:36 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:02.534 00:50:36 -- common/autotest_common.sh@872 -- # size=4096 00:31:02.534 00:50:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:02.534 00:50:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:02.534 00:50:36 -- common/autotest_common.sh@875 -- # return 0 00:31:02.534 00:50:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:02.534 00:50:36 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:02.534 00:50:36 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:02.792 00:50:36 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:02.792 { 00:31:02.792 "nbd_device": "/dev/nbd0", 00:31:02.792 "bdev_name": "Nvme0n1" 00:31:02.792 } 00:31:02.792 ]' 00:31:02.792 00:50:36 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:02.792 00:50:36 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:02.792 { 00:31:02.792 "nbd_device": "/dev/nbd0", 00:31:02.792 "bdev_name": "Nvme0n1" 00:31:02.793 } 00:31:02.793 ]' 00:31:02.793 00:50:36 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:02.793 00:50:36 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:02.793 00:50:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:02.793 00:50:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:02.793 00:50:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:02.793 00:50:36 -- bdev/nbd_common.sh@51 -- # local i 00:31:02.793 00:50:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:02.793 00:50:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@41 -- # break 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@45 -- # return 0 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:03.051 00:50:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:03.309 00:50:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:03.309 00:50:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:03.309 00:50:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@65 -- # true 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@65 -- # count=0 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@122 -- # count=0 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@127 -- # return 0 00:31:03.567 00:50:36 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@12 -- # local i 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:03.567 00:50:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:31:03.825 /dev/nbd0 00:31:03.825 00:50:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:03.825 00:50:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:03.825 00:50:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:03.825 00:50:37 -- common/autotest_common.sh@855 -- # local i 00:31:03.825 00:50:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:03.825 00:50:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:03.826 00:50:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:03.826 00:50:37 -- common/autotest_common.sh@859 -- # break 00:31:03.826 00:50:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:03.826 00:50:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:03.826 00:50:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:03.826 1+0 records in 00:31:03.826 1+0 records out 00:31:03.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000477653 s, 8.6 MB/s 00:31:03.826 00:50:37 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:03.826 00:50:37 -- common/autotest_common.sh@872 -- # size=4096 00:31:03.826 00:50:37 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:03.826 00:50:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:03.826 00:50:37 -- common/autotest_common.sh@875 -- # return 0 00:31:03.826 00:50:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:03.826 00:50:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:03.826 00:50:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:03.826 00:50:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:03.826 00:50:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:04.084 { 00:31:04.084 "nbd_device": "/dev/nbd0", 00:31:04.084 "bdev_name": "Nvme0n1" 00:31:04.084 } 00:31:04.084 ]' 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:04.084 { 00:31:04.084 "nbd_device": "/dev/nbd0", 00:31:04.084 "bdev_name": "Nvme0n1" 00:31:04.084 } 00:31:04.084 ]' 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@65 -- # count=1 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@95 -- # count=1 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:04.084 00:50:37 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:04.084 256+0 records in 00:31:04.084 256+0 records out 00:31:04.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00801435 s, 131 MB/s 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:04.084 256+0 records in 00:31:04.084 256+0 records out 00:31:04.084 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.056756 s, 18.5 MB/s 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@51 -- # local i 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:04.084 00:50:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@41 -- # break 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@45 -- # return 0 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:04.650 00:50:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:04.908 
00:50:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@65 -- # true 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@65 -- # count=0 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@104 -- # count=0 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@109 -- # return 0 00:31:04.908 00:50:38 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:04.908 00:50:38 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:04.909 00:50:38 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:05.167 malloc_lvol_verify 00:31:05.167 00:50:38 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:05.425 094cb801-39ee-46ab-a5ae-02bb1b45fae8 00:31:05.426 00:50:38 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:05.684 f35ef976-6c82-4886-becf-4f61b6ed4362 00:31:05.684 00:50:39 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:05.942 /dev/nbd0 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:05.942 mke2fs 1.46.5 (30-Dec-2021) 00:31:05.942 00:31:05.942 Filesystem too small for a journal 00:31:05.942 Discarding device blocks: 0/1024 done 00:31:05.942 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:05.942 00:31:05.942 Allocating group tables: 0/1 done 00:31:05.942 Writing inode tables: 0/1 done 00:31:05.942 Writing superblocks and filesystem accounting information: 0/1 done 00:31:05.942 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@51 -- # local i 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:05.942 00:50:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:06.200 00:50:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:06.458 00:50:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:06.458 00:50:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:06.458 00:50:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:06.458 00:50:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:06.458 00:50:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:06.458 00:50:39 -- bdev/nbd_common.sh@41 -- # break 00:31:06.458 00:50:39 -- 
bdev/nbd_common.sh@45 -- # return 0 00:31:06.458 00:50:39 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:06.458 00:50:39 -- bdev/nbd_common.sh@147 -- # return 0 00:31:06.459 00:50:39 -- bdev/blockdev.sh@326 -- # killprocess 145045 00:31:06.459 00:50:39 -- common/autotest_common.sh@936 -- # '[' -z 145045 ']' 00:31:06.459 00:50:39 -- common/autotest_common.sh@940 -- # kill -0 145045 00:31:06.459 00:50:39 -- common/autotest_common.sh@941 -- # uname 00:31:06.459 00:50:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:06.459 00:50:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 145045 00:31:06.459 00:50:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:06.459 00:50:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:06.459 killing process with pid 145045 00:31:06.459 00:50:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 145045' 00:31:06.459 00:50:39 -- common/autotest_common.sh@955 -- # kill 145045 00:31:06.459 00:50:39 -- common/autotest_common.sh@960 -- # wait 145045 00:31:07.834 ************************************ 00:31:07.834 END TEST bdev_nbd 00:31:07.834 ************************************ 00:31:07.834 00:50:41 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:31:07.834 00:31:07.834 real 0m6.384s 00:31:07.834 user 0m9.399s 00:31:07.834 sys 0m1.285s 00:31:07.834 00:50:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:07.834 00:50:41 -- common/autotest_common.sh@10 -- # set +x 00:31:07.834 00:50:41 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:31:07.834 00:50:41 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:31:07.834 skipping fio tests on NVMe due to multi-ns failures. 00:31:07.834 00:50:41 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:31:07.834 00:50:41 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:07.834 00:50:41 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:07.834 00:50:41 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:31:07.834 00:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:07.834 00:50:41 -- common/autotest_common.sh@10 -- # set +x 00:31:07.834 ************************************ 00:31:07.834 START TEST bdev_verify 00:31:07.834 ************************************ 00:31:07.834 00:50:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:07.834 [2024-04-27 00:50:41.162350] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:07.834 [2024-04-27 00:50:41.162843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145249 ] 00:31:07.834 [2024-04-27 00:50:41.335728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:08.094 [2024-04-27 00:50:41.555959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.094 [2024-04-27 00:50:41.555959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.660 Running I/O for 5 seconds... 
00:31:13.926 00:31:13.927 Latency(us) 00:31:13.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.927 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:13.927 Verification LBA range: start 0x0 length 0xa0000 00:31:13.927 Nvme0n1 : 5.01 10972.24 42.86 0.00 0.00 11600.84 997.93 20614.05 00:31:13.927 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:13.927 Verification LBA range: start 0xa0000 length 0xa0000 00:31:13.927 Nvme0n1 : 5.01 10864.51 42.44 0.00 0.00 11715.31 1020.28 23116.33 00:31:13.927 =================================================================================================================== 00:31:13.927 Total : 21836.75 85.30 0.00 0.00 11657.78 997.93 23116.33 00:31:14.866 00:31:14.866 real 0m7.342s 00:31:14.866 user 0m13.385s 00:31:14.866 sys 0m0.301s 00:31:14.866 00:50:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:14.866 00:50:48 -- common/autotest_common.sh@10 -- # set +x 00:31:14.866 ************************************ 00:31:14.866 END TEST bdev_verify 00:31:14.866 ************************************ 00:31:15.124 00:50:48 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:15.124 00:50:48 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:31:15.124 00:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:15.124 00:50:48 -- common/autotest_common.sh@10 -- # set +x 00:31:15.124 ************************************ 00:31:15.124 START TEST bdev_verify_big_io 00:31:15.124 ************************************ 00:31:15.124 00:50:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:15.124 [2024-04-27 00:50:48.565331] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:15.124 [2024-04-27 00:50:48.565532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145362 ] 00:31:15.380 [2024-04-27 00:50:48.727981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:15.380 [2024-04-27 00:50:48.951403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.380 [2024-04-27 00:50:48.951406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.945 Running I/O for 5 seconds... 
00:31:21.209 00:31:21.209 Latency(us) 00:31:21.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.209 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:31:21.209 Verification LBA range: start 0x0 length 0xa000 00:31:21.209 Nvme0n1 : 5.09 789.01 49.31 0.00 0.00 158018.68 588.33 186837.18 00:31:21.209 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:31:21.209 Verification LBA range: start 0xa000 length 0xa000 00:31:21.209 Nvme0n1 : 5.09 820.79 51.30 0.00 0.00 151938.77 1325.61 181117.67 00:31:21.209 =================================================================================================================== 00:31:21.209 Total : 1609.80 100.61 0.00 0.00 154919.76 588.33 186837.18 00:31:22.583 00:31:22.583 real 0m7.442s 00:31:22.583 user 0m13.688s 00:31:22.583 sys 0m0.268s 00:31:22.583 00:50:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:22.583 00:50:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.583 ************************************ 00:31:22.583 END TEST bdev_verify_big_io 00:31:22.583 ************************************ 00:31:22.583 00:50:55 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:22.583 00:50:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:22.583 00:50:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:22.583 00:50:55 -- common/autotest_common.sh@10 -- # set +x 00:31:22.583 ************************************ 00:31:22.583 START TEST bdev_write_zeroes 00:31:22.583 ************************************ 00:31:22.583 00:50:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:22.583 [2024-04-27 00:50:56.106115] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:22.583 [2024-04-27 00:50:56.106610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145468 ] 00:31:22.842 [2024-04-27 00:50:56.274530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.110 [2024-04-27 00:50:56.481049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.391 Running I/O for 1 seconds... 
00:31:24.771 00:31:24.771 Latency(us) 00:31:24.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.772 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:31:24.772 Nvme0n1 : 1.00 48129.55 188.01 0.00 0.00 2652.80 878.78 8936.73 00:31:24.772 =================================================================================================================== 00:31:24.772 Total : 48129.55 188.01 0.00 0.00 2652.80 878.78 8936.73 00:31:25.707 00:31:25.707 real 0m2.912s 00:31:25.707 user 0m2.570s 00:31:25.707 sys 0m0.242s 00:31:25.707 00:50:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:25.707 00:50:58 -- common/autotest_common.sh@10 -- # set +x 00:31:25.707 ************************************ 00:31:25.707 END TEST bdev_write_zeroes 00:31:25.707 ************************************ 00:31:25.707 00:50:58 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:25.707 00:50:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:25.707 00:50:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:25.707 00:50:58 -- common/autotest_common.sh@10 -- # set +x 00:31:25.707 ************************************ 00:31:25.707 START TEST bdev_json_nonenclosed 00:31:25.707 ************************************ 00:31:25.707 00:50:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:25.707 [2024-04-27 00:50:59.103713] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:25.707 [2024-04-27 00:50:59.104108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145529 ] 00:31:25.707 [2024-04-27 00:50:59.271348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.965 [2024-04-27 00:50:59.469885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.965 [2024-04-27 00:50:59.470045] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
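[Editor's note] For anyone replaying the three bdevperf passes above by hand, the whole harness reduces to one binary run with different workload flags. A condensed sketch, with every flag copied from the traced run_test invocations (paths assume this run's repo layout; the harness's trailing '' argument is dropped):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify, 4 KiB I/O
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io, 64 KiB I/O
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes, single core

Here -q is the queue depth, -o the I/O size in bytes, -w the workload and -t the run time in seconds; -C and -m 0x3 are carried over verbatim from the two-core verify runs.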
00:31:25.965 [2024-04-27 00:50:59.470092] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:25.965 [2024-04-27 00:50:59.470118] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:26.531 00:31:26.531 real 0m0.802s 00:31:26.531 user 0m0.582s 00:31:26.531 sys 0m0.120s 00:31:26.531 00:50:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:26.531 00:50:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.531 ************************************ 00:31:26.531 END TEST bdev_json_nonenclosed 00:31:26.531 ************************************ 00:31:26.531 00:50:59 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:26.531 00:50:59 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:31:26.531 00:50:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:26.531 00:50:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.531 ************************************ 00:31:26.531 START TEST bdev_json_nonarray 00:31:26.531 ************************************ 00:31:26.531 00:50:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:26.531 [2024-04-27 00:50:59.988770] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:26.531 [2024-04-27 00:50:59.989157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145571 ] 00:31:26.790 [2024-04-27 00:51:00.162299] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.049 [2024-04-27 00:51:00.384098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.049 [2024-04-27 00:51:00.384270] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:31:27.049 [2024-04-27 00:51:00.384326] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:27.049 [2024-04-27 00:51:00.384353] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:27.307 00:31:27.307 real 0m0.848s 00:31:27.307 user 0m0.622s 00:31:27.307 sys 0m0.125s 00:31:27.307 00:51:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:27.307 00:51:00 -- common/autotest_common.sh@10 -- # set +x 00:31:27.307 ************************************ 00:31:27.307 END TEST bdev_json_nonarray 00:31:27.307 ************************************ 00:31:27.307 00:51:00 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:31:27.307 00:51:00 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:31:27.307 00:51:00 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:31:27.307 00:51:00 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:31:27.307 00:51:00 -- bdev/blockdev.sh@811 -- # cleanup 00:31:27.307 00:51:00 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:31:27.307 00:51:00 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:27.307 00:51:00 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:31:27.307 00:51:00 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:31:27.307 00:51:00 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:31:27.307 00:51:00 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:31:27.307 00:31:27.307 real 0m34.560s 00:31:27.307 user 0m51.628s 00:31:27.307 sys 0m3.793s 00:31:27.307 00:51:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:27.307 00:51:00 -- common/autotest_common.sh@10 -- # set +x 00:31:27.307 ************************************ 00:31:27.307 END TEST blockdev_nvme 00:31:27.307 ************************************ 00:31:27.307 00:51:00 -- spdk/autotest.sh@209 -- # uname -s 00:31:27.307 00:51:00 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:31:27.307 00:51:00 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:31:27.307 00:51:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:27.307 00:51:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:27.307 00:51:00 -- common/autotest_common.sh@10 -- # set +x 00:31:27.307 ************************************ 00:31:27.307 START TEST blockdev_nvme_gpt 00:31:27.307 ************************************ 00:31:27.307 00:51:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:31:27.565 * Looking for test storage... 
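[Editor's note] The two negative tests that just finished feed bdevperf deliberately malformed JSON configs. The actual contents of nonenclosed.json and nonarray.json are not shown in this trace, so the one-liners below are only hypothetical stand-ins shaped to trip the same two json_config checks quoted above:

    # Top-level value is an array, i.e. valid JSON but not an object:
    echo '[ { "subsystems": [] } ]' > nonenclosed.json
    # -> *ERROR*: Invalid JSON configuration: not enclosed in {}.
    # "subsystems" is present but holds an object instead of an array:
    echo '{ "subsystems": { "method": "bdev_malloc_create" } }' > nonarray.json
    # -> *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.

In both cases the app exits non-zero ("spdk_app_stop'd on non-zero"), which is the failure these tests are there to observe.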
00:31:27.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:27.565 00:51:00 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:27.565 00:51:00 -- bdev/nbd_common.sh@6 -- # set -e 00:31:27.565 00:51:00 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:27.565 00:51:00 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:27.565 00:51:00 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:27.565 00:51:00 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:27.565 00:51:00 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:27.565 00:51:00 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:27.565 00:51:00 -- bdev/blockdev.sh@20 -- # : 00:31:27.565 00:51:00 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:31:27.565 00:51:00 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:31:27.565 00:51:00 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:31:27.565 00:51:00 -- bdev/blockdev.sh@674 -- # uname -s 00:31:27.565 00:51:00 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:31:27.565 00:51:00 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:31:27.565 00:51:00 -- bdev/blockdev.sh@682 -- # test_type=gpt 00:31:27.565 00:51:00 -- bdev/blockdev.sh@683 -- # crypto_device= 00:31:27.565 00:51:00 -- bdev/blockdev.sh@684 -- # dek= 00:31:27.565 00:51:00 -- bdev/blockdev.sh@685 -- # env_ctx= 00:31:27.565 00:51:00 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:31:27.565 00:51:00 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:31:27.565 00:51:00 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:31:27.565 00:51:00 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:31:27.565 00:51:00 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:31:27.565 00:51:00 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=145660 00:31:27.565 00:51:00 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:27.565 00:51:00 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:27.565 00:51:00 -- bdev/blockdev.sh@49 -- # waitforlisten 145660 00:31:27.565 00:51:00 -- common/autotest_common.sh@817 -- # '[' -z 145660 ']' 00:31:27.565 00:51:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.565 00:51:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:27.565 00:51:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.565 00:51:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:27.565 00:51:00 -- common/autotest_common.sh@10 -- # set +x 00:31:27.565 [2024-04-27 00:51:01.021049] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
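[Editor's note] start_spdk_tgt above backgrounds the target and then sits in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. Stripped of harness plumbing, the pattern is roughly the following; the rpc_get_methods probe is my choice of a cheap RPC for the sketch, not necessarily what waitforlisten literally calls:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # Poll until the target's RPC server accepts requests on the default socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt ($spdk_tgt_pid) is up on /var/tmp/spdk.sock"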
00:31:27.565 [2024-04-27 00:51:01.021419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145660 ] 00:31:27.823 [2024-04-27 00:51:01.187109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.081 [2024-04-27 00:51:01.465823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.648 00:51:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:28.648 00:51:02 -- common/autotest_common.sh@850 -- # return 0 00:31:28.648 00:51:02 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:31:28.648 00:51:02 -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:31:28.648 00:51:02 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:28.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:29.165 Waiting for block devices as requested 00:31:29.165 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:29.165 00:51:02 -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:31:29.165 00:51:02 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:31:29.165 00:51:02 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:31:29.165 00:51:02 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:31:29.165 00:51:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:31:29.165 00:51:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:31:29.165 00:51:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:29.165 00:51:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:29.165 00:51:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:29.165 00:51:02 -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme0/nvme0n1') 00:31:29.165 00:51:02 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:31:29.165 00:51:02 -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:31:29.165 00:51:02 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:31:29.165 00:51:02 -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:31:29.165 00:51:02 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:31:29.165 00:51:02 -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:31:29.165 00:51:02 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:31:29.165 BYT; 00:31:29.165 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:31:29.165 00:51:02 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:31:29.165 BYT; 00:31:29.165 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:31:29.165 00:51:02 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:31:29.165 00:51:02 -- bdev/blockdev.sh@116 -- # break 00:31:29.165 00:51:02 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:31:29.165 00:51:02 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:31:29.165 00:51:02 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:31:29.165 00:51:02 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:31:29.732 00:51:03 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:31:29.732 00:51:03 -- 
scripts/common.sh@408 -- # local spdk_guid 00:31:29.732 00:51:03 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:31:29.733 00:51:03 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:29.733 00:51:03 -- scripts/common.sh@413 -- # IFS='()' 00:31:29.733 00:51:03 -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:31:29.733 00:51:03 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:29.733 00:51:03 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:31:29.733 00:51:03 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:29.733 00:51:03 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:29.733 00:51:03 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:31:29.733 00:51:03 -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:31:29.733 00:51:03 -- scripts/common.sh@420 -- # local spdk_guid 00:31:29.733 00:51:03 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:31:29.733 00:51:03 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:29.733 00:51:03 -- scripts/common.sh@425 -- # IFS='()' 00:31:29.733 00:51:03 -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:31:29.733 00:51:03 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:31:29.733 00:51:03 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:31:29.733 00:51:03 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:29.733 00:51:03 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:29.733 00:51:03 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:31:29.733 00:51:03 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:31:30.668 The operation has completed successfully. 00:31:30.668 00:51:04 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:31:31.603 The operation has completed successfully. 
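[Editor's note] That completes setup_gpt_conf: parted first writes a fresh GPT label with two half-disk partitions, then sgdisk rewrites each partition's type GUID and unique GUID to the values grepped out of module/bdev/gpt/gpt.h above. Condensed from the trace:

    # Partition type GUIDs as extracted from gpt.h in the trace above.
    SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b         # SPDK_GPT_PART_TYPE_GUID
    SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c     # SPDK_GPT_PART_TYPE_GUID_OLD

    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:"$SPDK_GPT_GUID"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:"$SPDK_GPT_OLD_GUID" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

Partitions carrying either of these two type GUIDs are what the gpt vbdev module later surfaces as Nvme0n1p1 and Nvme0n1p2 in the bdev_get_bdevs dump further down.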
00:31:31.604 00:51:05 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:32.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:32.170 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:33.106 00:51:06 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:31:33.106 00:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.106 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:31:33.106 [] 00:31:33.106 00:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.106 00:51:06 -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:31:33.106 00:51:06 -- bdev/blockdev.sh@81 -- # local json 00:31:33.106 00:51:06 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:31:33.106 00:51:06 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:33.106 00:51:06 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:31:33.106 00:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.106 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:31:33.106 00:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.106 00:51:06 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:31:33.106 00:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.106 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:31:33.106 00:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.106 00:51:06 -- bdev/blockdev.sh@740 -- # cat 00:31:33.106 00:51:06 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:31:33.106 00:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.106 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:31:33.106 00:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.106 00:51:06 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:31:33.106 00:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.106 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:31:33.106 00:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.106 00:51:06 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:33.106 00:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.106 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:31:33.106 00:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.106 00:51:06 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:31:33.106 00:51:06 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:31:33.106 00:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.106 00:51:06 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:31:33.106 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:31:33.106 00:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.106 00:51:06 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:31:33.106 00:51:06 -- bdev/blockdev.sh@749 -- # jq -r .name 00:31:33.106 00:51:06 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:31:33.106 00:51:06 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:31:33.106 00:51:06 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:31:33.106 00:51:06 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:31:33.106 00:51:06 -- bdev/blockdev.sh@754 -- # killprocess 145660 00:31:33.106 00:51:06 -- common/autotest_common.sh@936 -- # '[' -z 145660 ']' 00:31:33.106 00:51:06 -- common/autotest_common.sh@940 -- # kill -0 145660 00:31:33.106 00:51:06 -- common/autotest_common.sh@941 -- # uname 00:31:33.106 00:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:33.106 00:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 145660 00:31:33.365 00:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:33.365 00:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:33.365 killing process with pid 145660 00:31:33.365 00:51:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 145660' 00:31:33.365 00:51:06 -- common/autotest_common.sh@955 -- # kill 145660 00:31:33.365 00:51:06 -- common/autotest_common.sh@960 -- # wait 145660 00:31:35.269 00:51:08 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:35.270 00:51:08 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:31:35.270 00:51:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:31:35.270 00:51:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:35.270 00:51:08 -- common/autotest_common.sh@10 -- # set +x 00:31:35.270 ************************************ 00:31:35.270 START TEST bdev_hello_world 00:31:35.270 ************************************ 00:31:35.270 00:51:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:31:35.270 [2024-04-27 00:51:08.727628] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:35.270 [2024-04-27 00:51:08.727858] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146089 ] 00:31:35.528 [2024-04-27 00:51:08.897613] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.528 [2024-04-27 00:51:09.089613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.096 [2024-04-27 00:51:09.507149] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:36.096 [2024-04-27 00:51:09.507227] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:31:36.096 [2024-04-27 00:51:09.507287] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:36.096 [2024-04-27 00:51:09.510311] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:36.096 [2024-04-27 00:51:09.510850] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:36.096 [2024-04-27 00:51:09.510914] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:36.096 [2024-04-27 00:51:09.511221] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:36.096 00:31:36.096 [2024-04-27 00:51:09.511286] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:37.031 00:31:37.031 real 0m1.920s 00:31:37.031 user 0m1.583s 00:31:37.031 sys 0m0.237s 00:31:37.031 00:51:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:37.031 00:51:10 -- common/autotest_common.sh@10 -- # set +x 00:31:37.031 ************************************ 00:31:37.031 END TEST bdev_hello_world 00:31:37.031 ************************************ 00:31:37.290 00:51:10 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:31:37.290 00:51:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:37.290 00:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:37.290 00:51:10 -- common/autotest_common.sh@10 -- # set +x 00:31:37.290 ************************************ 00:31:37.290 START TEST bdev_bounds 00:31:37.290 ************************************ 00:31:37.290 00:51:10 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:31:37.290 00:51:10 -- bdev/blockdev.sh@290 -- # bdevio_pid=146144 00:31:37.290 00:51:10 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:37.290 00:51:10 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:37.290 Process bdevio pid: 146144 00:31:37.290 00:51:10 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 146144' 00:31:37.290 00:51:10 -- bdev/blockdev.sh@293 -- # waitforlisten 146144 00:31:37.290 00:51:10 -- common/autotest_common.sh@817 -- # '[' -z 146144 ']' 00:31:37.290 00:51:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.290 00:51:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:37.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.290 00:51:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
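[Editor's note] The hello-world pass above is just the examples/hello_bdev binary pointed at the first GPT partition; reproducing it by hand is a single command with the arguments from the trace:

    # Opens Nvme0n1p1, writes "Hello World!", reads it back, then stops the app.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1

The NOTICE lines it printed (open bdev, open io channel, write complete, read back) map one-to-one onto the hello_start, hello_write and hello_read callbacks in the example's hello_bdev.c source.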
00:31:37.290 00:51:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:37.290 00:51:10 -- common/autotest_common.sh@10 -- # set +x 00:31:37.290 [2024-04-27 00:51:10.728899] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:37.290 [2024-04-27 00:51:10.729127] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146144 ] 00:31:37.548 [2024-04-27 00:51:10.911823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:37.548 [2024-04-27 00:51:11.115649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.548 [2024-04-27 00:51:11.115777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.548 [2024-04-27 00:51:11.115774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:38.114 00:51:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:38.114 00:51:11 -- common/autotest_common.sh@850 -- # return 0 00:31:38.114 00:51:11 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:38.371 I/O targets: 00:31:38.371 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:31:38.371 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:31:38.371 00:31:38.371 00:31:38.371 CUnit - A unit testing framework for C - Version 2.1-3 00:31:38.371 http://cunit.sourceforge.net/ 00:31:38.371 00:31:38.371 00:31:38.371 Suite: bdevio tests on: Nvme0n1p2 00:31:38.371 Test: blockdev write read block ...passed 00:31:38.371 Test: blockdev write zeroes read block ...passed 00:31:38.371 Test: blockdev write zeroes read no split ...passed 00:31:38.371 Test: blockdev write zeroes read split ...passed 00:31:38.371 Test: blockdev write zeroes read split partial ...passed 00:31:38.371 Test: blockdev reset ...[2024-04-27 00:51:11.858677] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:31:38.371 [2024-04-27 00:51:11.862075] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:38.371 passed 00:31:38.371 Test: blockdev write read 8 blocks ...passed 00:31:38.371 Test: blockdev write read size > 128k ...passed 00:31:38.371 Test: blockdev write read invalid size ...passed 00:31:38.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:38.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:38.371 Test: blockdev write read max offset ...passed 00:31:38.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:38.371 Test: blockdev writev readv 8 blocks ...passed 00:31:38.371 Test: blockdev writev readv 30 x 1block ...passed 00:31:38.371 Test: blockdev writev readv block ...passed 00:31:38.371 Test: blockdev writev readv size > 128k ...passed 00:31:38.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:38.371 Test: blockdev comparev and writev ...[2024-04-27 00:51:11.869483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x29a0b000 len:0x1000 00:31:38.371 [2024-04-27 00:51:11.869572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:38.371 passed 00:31:38.371 Test: blockdev nvme passthru rw ...passed 00:31:38.371 Test: blockdev nvme passthru vendor specific ...passed 00:31:38.371 Test: blockdev nvme admin passthru ...passed 00:31:38.371 Test: blockdev copy ...passed 00:31:38.371 Suite: bdevio tests on: Nvme0n1p1 00:31:38.371 Test: blockdev write read block ...passed 00:31:38.371 Test: blockdev write zeroes read block ...passed 00:31:38.371 Test: blockdev write zeroes read no split ...passed 00:31:38.371 Test: blockdev write zeroes read split ...passed 00:31:38.371 Test: blockdev write zeroes read split partial ...passed 00:31:38.371 Test: blockdev reset ...[2024-04-27 00:51:11.919671] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:31:38.371 [2024-04-27 00:51:11.922641] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:38.371 passed 00:31:38.371 Test: blockdev write read 8 blocks ...passed 00:31:38.371 Test: blockdev write read size > 128k ...passed 00:31:38.371 Test: blockdev write read invalid size ...passed 00:31:38.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:38.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:38.371 Test: blockdev write read max offset ...passed 00:31:38.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:38.371 Test: blockdev writev readv 8 blocks ...passed 00:31:38.371 Test: blockdev writev readv 30 x 1block ...passed 00:31:38.371 Test: blockdev writev readv block ...passed 00:31:38.371 Test: blockdev writev readv size > 128k ...passed 00:31:38.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:38.371 Test: blockdev comparev and writev ...[2024-04-27 00:51:11.930489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x29a0d000 len:0x1000 00:31:38.371 [2024-04-27 00:51:11.930578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:31:38.371 passed 00:31:38.371 Test: blockdev nvme passthru rw ...passed 00:31:38.371 Test: blockdev nvme passthru vendor specific ...passed 00:31:38.371 Test: blockdev nvme admin passthru ...passed 00:31:38.371 Test: blockdev copy ...passed 00:31:38.371 00:31:38.371 Run Summary: Type Total Ran Passed Failed Inactive 00:31:38.371 suites 2 2 n/a 0 0 00:31:38.371 tests 46 46 46 0 0 00:31:38.371 asserts 284 284 284 0 n/a 00:31:38.371 00:31:38.371 Elapsed time = 0.351 seconds 00:31:38.371 0 00:31:38.371 00:51:11 -- bdev/blockdev.sh@295 -- # killprocess 146144 00:31:38.371 00:51:11 -- common/autotest_common.sh@936 -- # '[' -z 146144 ']' 00:31:38.371 00:51:11 -- common/autotest_common.sh@940 -- # kill -0 146144 00:31:38.371 00:51:11 -- common/autotest_common.sh@941 -- # uname 00:31:38.371 00:51:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:38.371 00:51:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146144 00:31:38.629 00:51:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:38.629 00:51:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:38.629 killing process with pid 146144 00:31:38.629 00:51:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146144' 00:31:38.629 00:51:11 -- common/autotest_common.sh@955 -- # kill 146144 00:31:38.629 00:51:11 -- common/autotest_common.sh@960 -- # wait 146144 00:31:40.022 00:51:13 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:31:40.023 00:31:40.023 real 0m2.506s 00:31:40.023 user 0m5.928s 00:31:40.023 sys 0m0.333s 00:31:40.023 00:51:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:40.023 00:51:13 -- common/autotest_common.sh@10 -- # set +x 00:31:40.023 ************************************ 00:31:40.023 END TEST bdev_bounds 00:31:40.023 ************************************ 00:31:40.023 00:51:13 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:31:40.023 00:51:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:31:40.023 00:51:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:40.023 00:51:13 -- common/autotest_common.sh@10 -- # set +x 00:31:40.023 ************************************ 00:31:40.023 START TEST bdev_nbd 
00:31:40.023 ************************************ 00:31:40.023 00:51:13 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:31:40.023 00:51:13 -- bdev/blockdev.sh@300 -- # uname -s 00:31:40.023 00:51:13 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:31:40.023 00:51:13 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:40.023 00:51:13 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:40.023 00:51:13 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:31:40.023 00:51:13 -- bdev/blockdev.sh@304 -- # local bdev_all 00:31:40.023 00:51:13 -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:31:40.023 00:51:13 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:31:40.023 00:51:13 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:40.023 00:51:13 -- bdev/blockdev.sh@311 -- # local nbd_all 00:31:40.023 00:51:13 -- bdev/blockdev.sh@312 -- # bdev_num=2 00:31:40.023 00:51:13 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:40.023 00:51:13 -- bdev/blockdev.sh@314 -- # local nbd_list 00:31:40.023 00:51:13 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:40.023 00:51:13 -- bdev/blockdev.sh@315 -- # local bdev_list 00:31:40.023 00:51:13 -- bdev/blockdev.sh@318 -- # nbd_pid=146211 00:31:40.023 00:51:13 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:40.023 00:51:13 -- bdev/blockdev.sh@320 -- # waitforlisten 146211 /var/tmp/spdk-nbd.sock 00:31:40.023 00:51:13 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:40.023 00:51:13 -- common/autotest_common.sh@817 -- # '[' -z 146211 ']' 00:31:40.023 00:51:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:40.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:40.023 00:51:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:40.023 00:51:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:40.023 00:51:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:40.023 00:51:13 -- common/autotest_common.sh@10 -- # set +x 00:31:40.023 [2024-04-27 00:51:13.327965] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
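[Editor's note] The bdev_nbd trace that follows is long, but the core loop is small: export each GPT bdev as a kernel block node over the dedicated /var/tmp/spdk-nbd.sock RPC server, prove the nodes accept direct I/O with dd, round-trip 1 MiB of random data through both and compare, then tear the exports down. A condensed sketch built from the commands traced below:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0
    $RPC nbd_start_disk Nvme0n1p2 /dev/nbd1

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256          # 1 MiB test pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"                          # read back, byte-compare
    done

    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1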
00:31:40.023 [2024-04-27 00:51:13.328191] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.023 [2024-04-27 00:51:13.487695] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.281 [2024-04-27 00:51:13.729052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.847 00:51:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:40.847 00:51:14 -- common/autotest_common.sh@850 -- # return 0 00:31:40.847 00:51:14 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@24 -- # local i 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:40.847 00:51:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:31:41.106 00:51:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:41.106 00:51:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:41.106 00:51:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:41.106 00:51:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:41.106 00:51:14 -- common/autotest_common.sh@855 -- # local i 00:31:41.106 00:51:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:41.106 00:51:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:41.106 00:51:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:41.106 00:51:14 -- common/autotest_common.sh@859 -- # break 00:31:41.106 00:51:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:41.106 00:51:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:41.106 00:51:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:41.106 1+0 records in 00:31:41.106 1+0 records out 00:31:41.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770873 s, 5.3 MB/s 00:31:41.106 00:51:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:41.106 00:51:14 -- common/autotest_common.sh@872 -- # size=4096 00:31:41.106 00:51:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:41.106 00:51:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:41.106 00:51:14 -- common/autotest_common.sh@875 -- # return 0 00:31:41.106 00:51:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:41.106 00:51:14 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:41.106 00:51:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:31:41.364 00:51:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:41.364 00:51:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:41.364 00:51:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:41.364 00:51:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:31:41.365 00:51:14 -- common/autotest_common.sh@855 -- # local i 00:31:41.365 00:51:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:41.365 00:51:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:41.365 00:51:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:31:41.365 00:51:14 -- common/autotest_common.sh@859 -- # break 00:31:41.365 00:51:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:41.365 00:51:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:41.365 00:51:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:41.365 1+0 records in 00:31:41.365 1+0 records out 00:31:41.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425103 s, 9.6 MB/s 00:31:41.365 00:51:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:41.365 00:51:14 -- common/autotest_common.sh@872 -- # size=4096 00:31:41.365 00:51:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:41.365 00:51:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:41.365 00:51:14 -- common/autotest_common.sh@875 -- # return 0 00:31:41.365 00:51:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:41.365 00:51:14 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:31:41.365 00:51:14 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:41.623 00:51:15 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:41.623 { 00:31:41.623 "nbd_device": "/dev/nbd0", 00:31:41.623 "bdev_name": "Nvme0n1p1" 00:31:41.623 }, 00:31:41.623 { 00:31:41.623 "nbd_device": "/dev/nbd1", 00:31:41.623 "bdev_name": "Nvme0n1p2" 00:31:41.623 } 00:31:41.623 ]' 00:31:41.623 00:51:15 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:41.623 00:51:15 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:41.623 { 00:31:41.623 "nbd_device": "/dev/nbd0", 00:31:41.623 "bdev_name": "Nvme0n1p1" 00:31:41.623 }, 00:31:41.623 { 00:31:41.623 "nbd_device": "/dev/nbd1", 00:31:41.623 "bdev_name": "Nvme0n1p2" 00:31:41.623 } 00:31:41.623 ]' 00:31:41.623 00:51:15 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@51 -- # local i 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:41.881 00:51:15 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@41 -- # break 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@45 -- # return 0 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:41.881 00:51:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@41 -- # break 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@45 -- # return 0 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.140 00:51:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:42.397 00:51:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:42.656 00:51:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:42.656 00:51:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@65 -- # true 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@65 -- # count=0 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@122 -- # count=0 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@127 -- # return 0 00:31:42.656 00:51:16 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@12 -- # local i 00:31:42.656 00:51:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:42.657 00:51:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:42.657 00:51:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:31:42.915 /dev/nbd0 00:31:42.915 00:51:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:42.915 00:51:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:42.915 00:51:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:42.915 00:51:16 -- common/autotest_common.sh@855 -- # local i 00:31:42.915 00:51:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:42.915 00:51:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:42.915 00:51:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:42.915 00:51:16 -- common/autotest_common.sh@859 -- # break 00:31:42.915 00:51:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:42.915 00:51:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:42.915 00:51:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:42.915 1+0 records in 00:31:42.915 1+0 records out 00:31:42.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654547 s, 6.3 MB/s 00:31:42.915 00:51:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:42.915 00:51:16 -- common/autotest_common.sh@872 -- # size=4096 00:31:42.915 00:51:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:42.915 00:51:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:42.915 00:51:16 -- common/autotest_common.sh@875 -- # return 0 00:31:42.915 00:51:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:42.915 00:51:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:42.915 00:51:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:31:43.173 /dev/nbd1 00:31:43.173 00:51:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:43.173 00:51:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:43.173 00:51:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:31:43.173 00:51:16 -- common/autotest_common.sh@855 -- # local i 00:31:43.173 00:51:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:43.173 00:51:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:43.173 00:51:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:31:43.173 00:51:16 -- common/autotest_common.sh@859 -- # break 00:31:43.173 00:51:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:43.173 00:51:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:43.173 00:51:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:43.173 1+0 records in 00:31:43.173 1+0 records out 00:31:43.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143187 s, 2.9 MB/s 00:31:43.173 00:51:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.173 00:51:16 -- common/autotest_common.sh@872 -- # size=4096 00:31:43.173 00:51:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:43.174 00:51:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:43.174 00:51:16 -- common/autotest_common.sh@875 -- # return 0 00:31:43.174 00:51:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:43.174 00:51:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:43.174 00:51:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
00:31:43.174 00:51:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:43.174 00:51:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:43.431 00:51:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:43.431 { 00:31:43.431 "nbd_device": "/dev/nbd0", 00:31:43.431 "bdev_name": "Nvme0n1p1" 00:31:43.431 }, 00:31:43.431 { 00:31:43.431 "nbd_device": "/dev/nbd1", 00:31:43.431 "bdev_name": "Nvme0n1p2" 00:31:43.431 } 00:31:43.431 ]' 00:31:43.431 00:51:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:43.431 { 00:31:43.431 "nbd_device": "/dev/nbd0", 00:31:43.431 "bdev_name": "Nvme0n1p1" 00:31:43.431 }, 00:31:43.432 { 00:31:43.432 "nbd_device": "/dev/nbd1", 00:31:43.432 "bdev_name": "Nvme0n1p2" 00:31:43.432 } 00:31:43.432 ]' 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:43.432 /dev/nbd1' 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:43.432 /dev/nbd1' 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@65 -- # count=2 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@95 -- # count=2 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:43.432 256+0 records in 00:31:43.432 256+0 records out 00:31:43.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00937116 s, 112 MB/s 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:43.432 00:51:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:43.690 256+0 records in 00:31:43.690 256+0 records out 00:31:43.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0961785 s, 10.9 MB/s 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:43.690 256+0 records in 00:31:43.690 256+0 records out 00:31:43.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0940448 s, 11.1 MB/s 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
00:31:43.690 00:51:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@51 -- # local i 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:43.690 00:51:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@41 -- # break 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@45 -- # return 0 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:43.949 00:51:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@41 -- # break 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@45 -- # return 0 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:44.208 00:51:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@65 -- # true 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@65 -- # count=0 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@104 -- # count=0 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:44.466 00:51:17 -- 
bdev/nbd_common.sh@109 -- # return 0 00:31:44.466 00:51:17 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:44.466 00:51:17 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:44.724 malloc_lvol_verify 00:31:44.724 00:51:18 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:44.982 eb352c12-37fd-4c36-aefe-956e7cb8092f 00:31:44.982 00:51:18 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:45.240 3208820c-0027-4ad4-8486-92f5f0c3fb83 00:31:45.240 00:51:18 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:45.532 /dev/nbd0 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:45.532 mke2fs 1.46.5 (30-Dec-2021) 00:31:45.532 00:31:45.532 Filesystem too small for a journal 00:31:45.532 Discarding device blocks: 0/1024 done 00:31:45.532 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:45.532 00:31:45.532 Allocating group tables: 0/1 done 00:31:45.532 Writing inode tables: 0/1 done 00:31:45.532 Writing superblocks and filesystem accounting information: 0/1 done 00:31:45.532 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@51 -- # local i 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:45.532 00:51:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:45.790 00:51:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:45.790 00:51:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:45.790 00:51:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:45.790 00:51:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:45.790 00:51:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:45.790 00:51:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:46.049 00:51:19 -- bdev/nbd_common.sh@41 -- # break 00:31:46.049 00:51:19 -- bdev/nbd_common.sh@45 -- # return 0 00:31:46.049 00:51:19 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:46.049 00:51:19 -- bdev/nbd_common.sh@147 -- # return 0 00:31:46.049 00:51:19 -- bdev/blockdev.sh@326 -- # killprocess 146211 00:31:46.049 00:51:19 -- common/autotest_common.sh@936 -- # '[' -z 146211 ']' 00:31:46.049 00:51:19 -- common/autotest_common.sh@940 -- # kill -0 146211 00:31:46.049 00:51:19 -- common/autotest_common.sh@941 -- # uname 00:31:46.049 00:51:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:46.049 00:51:19 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146211 00:31:46.049 00:51:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:46.049 00:51:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:46.049 00:51:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146211' 00:31:46.049 killing process with pid 146211 00:31:46.049 00:51:19 -- common/autotest_common.sh@955 -- # kill 146211 00:31:46.049 00:51:19 -- common/autotest_common.sh@960 -- # wait 146211 00:31:46.985 00:51:20 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:31:46.985 00:31:46.985 real 0m7.304s 00:31:46.985 user 0m10.590s 00:31:46.985 sys 0m1.752s 00:31:46.985 00:51:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:46.985 00:51:20 -- common/autotest_common.sh@10 -- # set +x 00:31:46.985 ************************************ 00:31:46.985 END TEST bdev_nbd 00:31:46.985 ************************************ 00:31:47.244 00:51:20 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:31:47.244 00:51:20 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:31:47.244 00:51:20 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:31:47.244 00:51:20 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:31:47.244 skipping fio tests on NVMe due to multi-ns failures. 00:31:47.244 00:51:20 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:47.244 00:51:20 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:47.244 00:51:20 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:31:47.244 00:51:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:47.244 00:51:20 -- common/autotest_common.sh@10 -- # set +x 00:31:47.244 ************************************ 00:31:47.244 START TEST bdev_verify 00:31:47.244 ************************************ 00:31:47.244 00:51:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:47.244 [2024-04-27 00:51:20.724235] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:47.244 [2024-04-27 00:51:20.724641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146473 ] 00:31:47.502 [2024-04-27 00:51:20.886861] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:47.762 [2024-04-27 00:51:21.096573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.762 [2024-04-27 00:51:21.096577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.020 Running I/O for 5 seconds... 
00:31:53.286 00:31:53.286 Latency(us) 00:31:53.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.286 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:53.286 Verification LBA range: start 0x0 length 0x4ff80 00:31:53.286 Nvme0n1p1 : 5.02 5087.38 19.87 0.00 0.00 25062.38 1995.87 36223.53 00:31:53.286 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:53.286 Verification LBA range: start 0x4ff80 length 0x4ff80 00:31:53.286 Nvme0n1p1 : 5.02 5186.71 20.26 0.00 0.00 24581.04 1630.95 30265.72 00:31:53.286 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:53.286 Verification LBA range: start 0x0 length 0x4ff7f 00:31:53.286 Nvme0n1p2 : 5.02 5095.34 19.90 0.00 0.00 24987.50 1280.93 38606.66 00:31:53.286 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:53.286 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:31:53.286 Nvme0n1p2 : 5.03 5195.41 20.29 0.00 0.00 24509.49 934.63 26929.34 00:31:53.286 =================================================================================================================== 00:31:53.286 Total : 20564.84 80.33 0.00 0.00 24782.68 934.63 38606.66 00:31:54.688 00:31:54.688 real 0m7.288s 00:31:54.688 user 0m13.351s 00:31:54.688 sys 0m0.262s 00:31:54.688 00:51:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:54.688 ************************************ 00:31:54.688 END TEST bdev_verify 00:31:54.688 ************************************ 00:31:54.688 00:51:27 -- common/autotest_common.sh@10 -- # set +x 00:31:54.688 00:51:27 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:54.688 00:51:27 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:31:54.688 00:51:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:54.688 00:51:27 -- common/autotest_common.sh@10 -- # set +x 00:31:54.688 ************************************ 00:31:54.688 START TEST bdev_verify_big_io 00:31:54.688 ************************************ 00:31:54.688 00:51:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:54.688 [2024-04-27 00:51:28.108449] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:31:54.688 [2024-04-27 00:51:28.108909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146580 ] 00:31:54.947 [2024-04-27 00:51:28.276115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:54.947 [2024-04-27 00:51:28.499428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.947 [2024-04-27 00:51:28.499442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.514 Running I/O for 5 seconds... 
00:32:00.776 00:32:00.776 Latency(us) 00:32:00.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.776 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:00.776 Verification LBA range: start 0x0 length 0x4ff8 00:32:00.776 Nvme0n1p1 : 5.22 367.74 22.98 0.00 0.00 340099.38 11021.96 360328.84 00:32:00.776 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:00.776 Verification LBA range: start 0x4ff8 length 0x4ff8 00:32:00.776 Nvme0n1p1 : 5.21 393.14 24.57 0.00 0.00 318829.19 6911.07 348889.83 00:32:00.776 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:00.776 Verification LBA range: start 0x0 length 0x4ff7 00:32:00.776 Nvme0n1p2 : 5.22 370.16 23.14 0.00 0.00 325531.00 580.89 337450.82 00:32:00.776 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:00.776 Verification LBA range: start 0x4ff7 length 0x4ff7 00:32:00.776 Nvme0n1p2 : 5.21 392.75 24.55 0.00 0.00 308395.82 1675.64 274536.26 00:32:00.776 =================================================================================================================== 00:32:00.776 Total : 1523.80 95.24 0.00 0.00 322908.75 580.89 360328.84 00:32:02.209 ************************************ 00:32:02.209 END TEST bdev_verify_big_io 00:32:02.209 ************************************ 00:32:02.209 00:32:02.209 real 0m7.622s 00:32:02.209 user 0m13.998s 00:32:02.209 sys 0m0.314s 00:32:02.209 00:51:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:02.209 00:51:35 -- common/autotest_common.sh@10 -- # set +x 00:32:02.209 00:51:35 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:02.209 00:51:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:32:02.209 00:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:02.209 00:51:35 -- common/autotest_common.sh@10 -- # set +x 00:32:02.209 ************************************ 00:32:02.209 START TEST bdev_write_zeroes 00:32:02.209 ************************************ 00:32:02.209 00:51:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:02.468 [2024-04-27 00:51:35.835403] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:32:02.468 [2024-04-27 00:51:35.836096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146694 ] 00:32:02.468 [2024-04-27 00:51:36.012134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.726 [2024-04-27 00:51:36.216802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.291 Running I/O for 1 seconds... 
00:32:04.223 00:32:04.223 Latency(us) 00:32:04.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.223 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:04.223 Nvme0n1p1 : 1.00 27250.73 106.45 0.00 0.00 4687.42 2338.44 31933.91 00:32:04.223 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:04.223 Nvme0n1p2 : 1.01 27161.03 106.10 0.00 0.00 4695.20 2412.92 24665.37 00:32:04.223 =================================================================================================================== 00:32:04.223 Total : 54411.76 212.55 0.00 0.00 4691.30 2338.44 31933.91 00:32:05.157 ************************************ 00:32:05.157 END TEST bdev_write_zeroes 00:32:05.157 ************************************ 00:32:05.157 00:32:05.157 real 0m2.979s 00:32:05.157 user 0m2.637s 00:32:05.157 sys 0m0.240s 00:32:05.157 00:51:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:05.157 00:51:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.415 00:51:38 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:05.415 00:51:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:32:05.415 00:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:05.415 00:51:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.415 ************************************ 00:32:05.415 START TEST bdev_json_nonenclosed 00:32:05.415 ************************************ 00:32:05.415 00:51:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:05.415 [2024-04-27 00:51:38.902837] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:32:05.415 [2024-04-27 00:51:38.903273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146761 ] 00:32:05.674 [2024-04-27 00:51:39.073111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.010 [2024-04-27 00:51:39.326793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.010 [2024-04-27 00:51:39.327181] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:06.010 [2024-04-27 00:51:39.327401] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:06.010 [2024-04-27 00:51:39.327612] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:06.295 ************************************ 00:32:06.295 END TEST bdev_json_nonenclosed 00:32:06.295 ************************************ 00:32:06.295 00:32:06.295 real 0m0.947s 00:32:06.295 user 0m0.694s 00:32:06.295 sys 0m0.152s 00:32:06.295 00:51:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:06.295 00:51:39 -- common/autotest_common.sh@10 -- # set +x 00:32:06.295 00:51:39 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:06.295 00:51:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:32:06.295 00:51:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:06.295 00:51:39 -- common/autotest_common.sh@10 -- # set +x 00:32:06.295 ************************************ 00:32:06.295 START TEST bdev_json_nonarray 00:32:06.295 ************************************ 00:32:06.295 00:51:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:06.554 [2024-04-27 00:51:39.925448] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:32:06.554 [2024-04-27 00:51:39.925684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146797 ] 00:32:06.554 [2024-04-27 00:51:40.100696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.814 [2024-04-27 00:51:40.356251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.814 [2024-04-27 00:51:40.356387] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:06.814 [2024-04-27 00:51:40.356436] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:06.814 [2024-04-27 00:51:40.356469] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:07.380 ************************************ 00:32:07.381 END TEST bdev_json_nonarray 00:32:07.381 ************************************ 00:32:07.381 00:32:07.381 real 0m0.923s 00:32:07.381 user 0m0.657s 00:32:07.381 sys 0m0.162s 00:32:07.381 00:51:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:07.381 00:51:40 -- common/autotest_common.sh@10 -- # set +x 00:32:07.381 00:51:40 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:32:07.381 00:51:40 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:32:07.381 00:51:40 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:32:07.381 00:51:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:07.381 00:51:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:07.381 00:51:40 -- common/autotest_common.sh@10 -- # set +x 00:32:07.381 ************************************ 00:32:07.381 START TEST bdev_gpt_uuid 00:32:07.381 ************************************ 00:32:07.381 00:51:40 -- common/autotest_common.sh@1111 -- # bdev_gpt_uuid 00:32:07.381 00:51:40 -- bdev/blockdev.sh@614 -- # local bdev 00:32:07.381 00:51:40 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:32:07.381 00:51:40 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=146839 00:32:07.381 00:51:40 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:32:07.381 00:51:40 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:07.381 00:51:40 -- bdev/blockdev.sh@49 -- # waitforlisten 146839 00:32:07.381 00:51:40 -- common/autotest_common.sh@817 -- # '[' -z 146839 ']' 00:32:07.381 00:51:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.381 00:51:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:07.381 00:51:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.381 00:51:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:07.381 00:51:40 -- common/autotest_common.sh@10 -- # set +x 00:32:07.381 [2024-04-27 00:51:40.945831] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:32:07.381 [2024-04-27 00:51:40.946048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146839 ] 00:32:07.639 [2024-04-27 00:51:41.112466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.898 [2024-04-27 00:51:41.304578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.465 00:51:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:08.465 00:51:42 -- common/autotest_common.sh@850 -- # return 0 00:32:08.465 00:51:42 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:08.465 00:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.465 00:51:42 -- common/autotest_common.sh@10 -- # set +x 00:32:08.723 Some configs were skipped because the RPC state that can call them passed over. 
00:32:08.723 00:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.723 00:51:42 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:32:08.723 00:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.723 00:51:42 -- common/autotest_common.sh@10 -- # set +x 00:32:08.723 00:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.723 00:51:42 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:32:08.723 00:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.723 00:51:42 -- common/autotest_common.sh@10 -- # set +x 00:32:08.723 00:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.723 00:51:42 -- bdev/blockdev.sh@621 -- # bdev='[ 00:32:08.723 { 00:32:08.723 "name": "Nvme0n1p1", 00:32:08.723 "aliases": [ 00:32:08.723 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:32:08.723 ], 00:32:08.723 "product_name": "GPT Disk", 00:32:08.723 "block_size": 4096, 00:32:08.723 "num_blocks": 655104, 00:32:08.723 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:32:08.723 "assigned_rate_limits": { 00:32:08.723 "rw_ios_per_sec": 0, 00:32:08.723 "rw_mbytes_per_sec": 0, 00:32:08.723 "r_mbytes_per_sec": 0, 00:32:08.723 "w_mbytes_per_sec": 0 00:32:08.723 }, 00:32:08.723 "claimed": false, 00:32:08.723 "zoned": false, 00:32:08.723 "supported_io_types": { 00:32:08.723 "read": true, 00:32:08.723 "write": true, 00:32:08.723 "unmap": true, 00:32:08.723 "write_zeroes": true, 00:32:08.723 "flush": true, 00:32:08.723 "reset": true, 00:32:08.723 "compare": true, 00:32:08.723 "compare_and_write": false, 00:32:08.723 "abort": true, 00:32:08.723 "nvme_admin": false, 00:32:08.723 "nvme_io": false 00:32:08.723 }, 00:32:08.723 "driver_specific": { 00:32:08.723 "gpt": { 00:32:08.723 "base_bdev": "Nvme0n1", 00:32:08.723 "offset_blocks": 256, 00:32:08.723 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:32:08.723 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:32:08.723 "partition_name": "SPDK_TEST_first" 00:32:08.723 } 00:32:08.723 } 00:32:08.723 } 00:32:08.723 ]' 00:32:08.723 00:51:42 -- bdev/blockdev.sh@622 -- # jq -r length 00:32:08.723 00:51:42 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:32:08.723 00:51:42 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:32:08.723 00:51:42 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:32:08.723 00:51:42 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:32:08.982 00:51:42 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:32:08.982 00:51:42 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:32:08.982 00:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.982 00:51:42 -- common/autotest_common.sh@10 -- # set +x 00:32:08.982 00:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.982 00:51:42 -- bdev/blockdev.sh@626 -- # bdev='[ 00:32:08.982 { 00:32:08.982 "name": "Nvme0n1p2", 00:32:08.982 "aliases": [ 00:32:08.982 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:32:08.982 ], 00:32:08.982 "product_name": "GPT Disk", 00:32:08.982 "block_size": 4096, 00:32:08.982 "num_blocks": 655103, 00:32:08.982 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:32:08.982 "assigned_rate_limits": { 00:32:08.982 "rw_ios_per_sec": 0, 00:32:08.982 
"rw_mbytes_per_sec": 0, 00:32:08.982 "r_mbytes_per_sec": 0, 00:32:08.982 "w_mbytes_per_sec": 0 00:32:08.982 }, 00:32:08.982 "claimed": false, 00:32:08.982 "zoned": false, 00:32:08.982 "supported_io_types": { 00:32:08.982 "read": true, 00:32:08.982 "write": true, 00:32:08.982 "unmap": true, 00:32:08.982 "write_zeroes": true, 00:32:08.982 "flush": true, 00:32:08.982 "reset": true, 00:32:08.982 "compare": true, 00:32:08.982 "compare_and_write": false, 00:32:08.982 "abort": true, 00:32:08.982 "nvme_admin": false, 00:32:08.982 "nvme_io": false 00:32:08.982 }, 00:32:08.982 "driver_specific": { 00:32:08.982 "gpt": { 00:32:08.982 "base_bdev": "Nvme0n1", 00:32:08.982 "offset_blocks": 655360, 00:32:08.982 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:32:08.982 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:32:08.982 "partition_name": "SPDK_TEST_second" 00:32:08.982 } 00:32:08.982 } 00:32:08.982 } 00:32:08.982 ]' 00:32:08.982 00:51:42 -- bdev/blockdev.sh@627 -- # jq -r length 00:32:08.982 00:51:42 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:32:08.982 00:51:42 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:32:08.982 00:51:42 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:32:08.982 00:51:42 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:32:08.982 00:51:42 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:32:08.982 00:51:42 -- bdev/blockdev.sh@631 -- # killprocess 146839 00:32:08.982 00:51:42 -- common/autotest_common.sh@936 -- # '[' -z 146839 ']' 00:32:08.982 00:51:42 -- common/autotest_common.sh@940 -- # kill -0 146839 00:32:08.982 00:51:42 -- common/autotest_common.sh@941 -- # uname 00:32:08.982 00:51:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:08.982 00:51:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 146839 00:32:08.982 00:51:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:08.982 00:51:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:08.982 00:51:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 146839' 00:32:08.982 killing process with pid 146839 00:32:08.982 00:51:42 -- common/autotest_common.sh@955 -- # kill 146839 00:32:08.982 00:51:42 -- common/autotest_common.sh@960 -- # wait 146839 00:32:11.512 ************************************ 00:32:11.512 END TEST bdev_gpt_uuid 00:32:11.512 ************************************ 00:32:11.512 00:32:11.512 real 0m3.763s 00:32:11.512 user 0m3.898s 00:32:11.512 sys 0m0.553s 00:32:11.512 00:51:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:11.512 00:51:44 -- common/autotest_common.sh@10 -- # set +x 00:32:11.512 00:51:44 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:32:11.512 00:51:44 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:32:11.512 00:51:44 -- bdev/blockdev.sh@811 -- # cleanup 00:32:11.512 00:51:44 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:11.512 00:51:44 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:11.512 00:51:44 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:32:11.512 00:51:44 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:32:11.512 00:51:44 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:32:11.512 00:51:44 -- 
bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:11.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:11.512 Waiting for block devices as requested 00:32:11.512 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:11.771 00:51:45 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:32:11.771 00:51:45 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:32:11.771 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:32:11.771 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:32:11.771 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:32:11.771 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:32:11.771 00:51:45 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:32:11.771 00:32:11.771 real 0m44.256s 00:32:11.771 user 1m2.383s 00:32:11.771 sys 0m6.412s 00:32:11.771 00:51:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:11.771 ************************************ 00:32:11.771 END TEST blockdev_nvme_gpt 00:32:11.771 ************************************ 00:32:11.771 00:51:45 -- common/autotest_common.sh@10 -- # set +x 00:32:11.771 00:51:45 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:32:11.771 00:51:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:11.771 00:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:11.771 00:51:45 -- common/autotest_common.sh@10 -- # set +x 00:32:11.771 ************************************ 00:32:11.771 START TEST nvme 00:32:11.771 ************************************ 00:32:11.771 00:51:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:32:11.771 * Looking for test storage... 00:32:11.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:11.771 00:51:45 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:12.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:12.337 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:13.714 00:51:47 -- nvme/nvme.sh@79 -- # uname 00:32:13.972 00:51:47 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:32:13.972 00:51:47 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:32:13.972 00:51:47 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:32:13.972 00:51:47 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:32:13.973 00:51:47 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:32:13.973 00:51:47 -- common/autotest_common.sh@1055 -- # echo 0 00:32:13.973 00:51:47 -- common/autotest_common.sh@1057 -- # stubpid=147261 00:32:13.973 Waiting for stub to ready for secondary processes... 00:32:13.973 00:51:47 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:32:13.973 00:51:47 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:32:13.973 00:51:47 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:13.973 00:51:47 -- common/autotest_common.sh@1061 -- # [[ -e /proc/147261 ]] 00:32:13.973 00:51:47 -- common/autotest_common.sh@1062 -- # sleep 1s 00:32:13.973 [2024-04-27 00:51:47.364453] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:32:13.973 [2024-04-27 00:51:47.364734] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:32:14.908 00:51:48 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:14.908 00:51:48 -- common/autotest_common.sh@1061 -- # [[ -e /proc/147261 ]] 00:32:14.908 00:51:48 -- common/autotest_common.sh@1062 -- # sleep 1s 00:32:15.846 [2024-04-27 00:51:49.085562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:15.846 00:51:49 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:15.846 00:51:49 -- common/autotest_common.sh@1061 -- # [[ -e /proc/147261 ]] 00:32:15.846 00:51:49 -- common/autotest_common.sh@1062 -- # sleep 1s 00:32:15.846 [2024-04-27 00:51:49.323234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:15.846 [2024-04-27 00:51:49.323371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.846 [2024-04-27 00:51:49.323363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:15.846 [2024-04-27 00:51:49.335971] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:32:15.846 [2024-04-27 00:51:49.336096] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:32:15.846 [2024-04-27 00:51:49.348333] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:32:15.846 [2024-04-27 00:51:49.348977] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:32:16.781 00:51:50 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:32:16.781 done. 00:32:16.781 00:51:50 -- common/autotest_common.sh@1064 -- # echo done. 
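The handshake above is the whole stub protocol: the parent polls once a second for /var/run/spdk_stub0, which the stub only creates after its EAL setup (and, here, the cuse devices) is complete, and gives up early if the stub's PID drops out of /proc. Roughly, as a sketch of the traced loop; the exit path on a dead stub is an assumption:

# Sketch of the stub readiness loop traced above.
stubpid=147261   # PID from the trace above
echo "Waiting for stub to ready for secondary processes..."
while [ ! -e /var/run/spdk_stub0 ]; do
    # A vanished /proc entry means the stub crashed; stop waiting.
    [[ -e /proc/$stubpid ]] || exit 1
    sleep 1s
done
echo done.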
00:32:16.781 00:51:50 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:32:16.781 00:51:50 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:32:16.781 00:51:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:16.781 00:51:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.040 ************************************ 00:32:17.040 START TEST nvme_reset 00:32:17.040 ************************************ 00:32:17.040 00:51:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:32:17.299 Initializing NVMe Controllers 00:32:17.299 Skipping QEMU NVMe SSD at 0000:00:10.0 00:32:17.299 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:32:17.299 00:32:17.299 real 0m0.296s 00:32:17.299 user 0m0.112s 00:32:17.299 sys 0m0.107s 00:32:17.299 00:51:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:17.299 ************************************ 00:32:17.299 END TEST nvme_reset 00:32:17.299 00:51:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.299 ************************************ 00:32:17.299 00:51:50 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:32:17.299 00:51:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:17.299 00:51:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:17.299 00:51:50 -- common/autotest_common.sh@10 -- # set +x 00:32:17.299 ************************************ 00:32:17.299 START TEST nvme_identify 00:32:17.299 ************************************ 00:32:17.299 00:51:50 -- common/autotest_common.sh@1111 -- # nvme_identify 00:32:17.299 00:51:50 -- nvme/nvme.sh@12 -- # bdfs=() 00:32:17.299 00:51:50 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:32:17.299 00:51:50 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:32:17.299 00:51:50 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:32:17.299 00:51:50 -- common/autotest_common.sh@1499 -- # bdfs=() 00:32:17.299 00:51:50 -- common/autotest_common.sh@1499 -- # local bdfs 00:32:17.299 00:51:50 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:17.299 00:51:50 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:17.299 00:51:50 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:32:17.299 00:51:50 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:32:17.299 00:51:50 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:32:17.299 00:51:50 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:32:17.558 [2024-04-27 00:51:51.081943] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 147312 terminated unexpected 00:32:17.559 ===================================================== 00:32:17.559 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:17.559 ===================================================== 00:32:17.559 Controller Capabilities/Features 00:32:17.559 ================================ 00:32:17.559 Vendor ID: 1b36 00:32:17.559 Subsystem Vendor ID: 1af4 00:32:17.559 Serial Number: 12340 00:32:17.559 Model Number: QEMU NVMe Ctrl 00:32:17.559 Firmware Version: 8.0.0 00:32:17.559 Recommended Arb Burst: 6 00:32:17.559 IEEE OUI Identifier: 00 54 52 00:32:17.559 Multi-path I/O 00:32:17.559 May have multiple subsystem ports: No 00:32:17.559 May have multiple controllers: No 00:32:17.559 
Associated with SR-IOV VF: No 00:32:17.559 Max Data Transfer Size: 524288 00:32:17.559 Max Number of Namespaces: 256 00:32:17.559 Max Number of I/O Queues: 64 00:32:17.559 NVMe Specification Version (VS): 1.4 00:32:17.559 NVMe Specification Version (Identify): 1.4 00:32:17.559 Maximum Queue Entries: 2048 00:32:17.559 Contiguous Queues Required: Yes 00:32:17.559 Arbitration Mechanisms Supported 00:32:17.559 Weighted Round Robin: Not Supported 00:32:17.559 Vendor Specific: Not Supported 00:32:17.559 Reset Timeout: 7500 ms 00:32:17.559 Doorbell Stride: 4 bytes 00:32:17.559 NVM Subsystem Reset: Not Supported 00:32:17.559 Command Sets Supported 00:32:17.559 NVM Command Set: Supported 00:32:17.559 Boot Partition: Not Supported 00:32:17.559 Memory Page Size Minimum: 4096 bytes 00:32:17.559 Memory Page Size Maximum: 65536 bytes 00:32:17.559 Persistent Memory Region: Not Supported 00:32:17.559 Optional Asynchronous Events Supported 00:32:17.559 Namespace Attribute Notices: Supported 00:32:17.559 Firmware Activation Notices: Not Supported 00:32:17.559 ANA Change Notices: Not Supported 00:32:17.559 PLE Aggregate Log Change Notices: Not Supported 00:32:17.559 LBA Status Info Alert Notices: Not Supported 00:32:17.559 EGE Aggregate Log Change Notices: Not Supported 00:32:17.559 Normal NVM Subsystem Shutdown event: Not Supported 00:32:17.559 Zone Descriptor Change Notices: Not Supported 00:32:17.559 Discovery Log Change Notices: Not Supported 00:32:17.559 Controller Attributes 00:32:17.559 128-bit Host Identifier: Not Supported 00:32:17.559 Non-Operational Permissive Mode: Not Supported 00:32:17.559 NVM Sets: Not Supported 00:32:17.559 Read Recovery Levels: Not Supported 00:32:17.559 Endurance Groups: Not Supported 00:32:17.559 Predictable Latency Mode: Not Supported 00:32:17.559 Traffic Based Keep ALive: Not Supported 00:32:17.559 Namespace Granularity: Not Supported 00:32:17.559 SQ Associations: Not Supported 00:32:17.559 UUID List: Not Supported 00:32:17.559 Multi-Domain Subsystem: Not Supported 00:32:17.559 Fixed Capacity Management: Not Supported 00:32:17.559 Variable Capacity Management: Not Supported 00:32:17.559 Delete Endurance Group: Not Supported 00:32:17.559 Delete NVM Set: Not Supported 00:32:17.559 Extended LBA Formats Supported: Supported 00:32:17.559 Flexible Data Placement Supported: Not Supported 00:32:17.559 00:32:17.559 Controller Memory Buffer Support 00:32:17.559 ================================ 00:32:17.559 Supported: No 00:32:17.559 00:32:17.559 Persistent Memory Region Support 00:32:17.559 ================================ 00:32:17.559 Supported: No 00:32:17.559 00:32:17.559 Admin Command Set Attributes 00:32:17.559 ============================ 00:32:17.559 Security Send/Receive: Not Supported 00:32:17.559 Format NVM: Supported 00:32:17.559 Firmware Activate/Download: Not Supported 00:32:17.559 Namespace Management: Supported 00:32:17.559 Device Self-Test: Not Supported 00:32:17.559 Directives: Supported 00:32:17.559 NVMe-MI: Not Supported 00:32:17.559 Virtualization Management: Not Supported 00:32:17.559 Doorbell Buffer Config: Supported 00:32:17.559 Get LBA Status Capability: Not Supported 00:32:17.559 Command & Feature Lockdown Capability: Not Supported 00:32:17.559 Abort Command Limit: 4 00:32:17.559 Async Event Request Limit: 4 00:32:17.559 Number of Firmware Slots: N/A 00:32:17.559 Firmware Slot 1 Read-Only: N/A 00:32:17.559 Firmware Activation Without Reset: N/A 00:32:17.559 Multiple Update Detection Support: N/A 00:32:17.559 Firmware Update Granularity: No Information 
Provided 00:32:17.559 Per-Namespace SMART Log: Yes 00:32:17.559 Asymmetric Namespace Access Log Page: Not Supported 00:32:17.559 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:32:17.559 Command Effects Log Page: Supported 00:32:17.559 Get Log Page Extended Data: Supported 00:32:17.559 Telemetry Log Pages: Not Supported 00:32:17.559 Persistent Event Log Pages: Not Supported 00:32:17.559 Supported Log Pages Log Page: May Support 00:32:17.559 Commands Supported & Effects Log Page: Not Supported 00:32:17.559 Feature Identifiers & Effects Log Page:May Support 00:32:17.559 NVMe-MI Commands & Effects Log Page: May Support 00:32:17.559 Data Area 4 for Telemetry Log: Not Supported 00:32:17.559 Error Log Page Entries Supported: 1 00:32:17.559 Keep Alive: Not Supported 00:32:17.559 00:32:17.559 NVM Command Set Attributes 00:32:17.559 ========================== 00:32:17.559 Submission Queue Entry Size 00:32:17.559 Max: 64 00:32:17.559 Min: 64 00:32:17.559 Completion Queue Entry Size 00:32:17.559 Max: 16 00:32:17.559 Min: 16 00:32:17.559 Number of Namespaces: 256 00:32:17.559 Compare Command: Supported 00:32:17.559 Write Uncorrectable Command: Not Supported 00:32:17.559 Dataset Management Command: Supported 00:32:17.559 Write Zeroes Command: Supported 00:32:17.559 Set Features Save Field: Supported 00:32:17.559 Reservations: Not Supported 00:32:17.559 Timestamp: Supported 00:32:17.559 Copy: Supported 00:32:17.559 Volatile Write Cache: Present 00:32:17.559 Atomic Write Unit (Normal): 1 00:32:17.559 Atomic Write Unit (PFail): 1 00:32:17.559 Atomic Compare & Write Unit: 1 00:32:17.559 Fused Compare & Write: Not Supported 00:32:17.559 Scatter-Gather List 00:32:17.559 SGL Command Set: Supported 00:32:17.559 SGL Keyed: Not Supported 00:32:17.559 SGL Bit Bucket Descriptor: Not Supported 00:32:17.559 SGL Metadata Pointer: Not Supported 00:32:17.559 Oversized SGL: Not Supported 00:32:17.559 SGL Metadata Address: Not Supported 00:32:17.559 SGL Offset: Not Supported 00:32:17.559 Transport SGL Data Block: Not Supported 00:32:17.559 Replay Protected Memory Block: Not Supported 00:32:17.559 00:32:17.559 Firmware Slot Information 00:32:17.559 ========================= 00:32:17.559 Active slot: 1 00:32:17.559 Slot 1 Firmware Revision: 1.0 00:32:17.559 00:32:17.559 00:32:17.559 Commands Supported and Effects 00:32:17.559 ============================== 00:32:17.559 Admin Commands 00:32:17.559 -------------- 00:32:17.559 Delete I/O Submission Queue (00h): Supported 00:32:17.559 Create I/O Submission Queue (01h): Supported 00:32:17.559 Get Log Page (02h): Supported 00:32:17.559 Delete I/O Completion Queue (04h): Supported 00:32:17.559 Create I/O Completion Queue (05h): Supported 00:32:17.559 Identify (06h): Supported 00:32:17.559 Abort (08h): Supported 00:32:17.559 Set Features (09h): Supported 00:32:17.559 Get Features (0Ah): Supported 00:32:17.559 Asynchronous Event Request (0Ch): Supported 00:32:17.559 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:17.559 Directive Send (19h): Supported 00:32:17.559 Directive Receive (1Ah): Supported 00:32:17.559 Virtualization Management (1Ch): Supported 00:32:17.559 Doorbell Buffer Config (7Ch): Supported 00:32:17.559 Format NVM (80h): Supported LBA-Change 00:32:17.559 I/O Commands 00:32:17.559 ------------ 00:32:17.559 Flush (00h): Supported LBA-Change 00:32:17.559 Write (01h): Supported LBA-Change 00:32:17.559 Read (02h): Supported 00:32:17.559 Compare (05h): Supported 00:32:17.559 Write Zeroes (08h): Supported LBA-Change 00:32:17.559 Dataset Management (09h): 
Supported LBA-Change 00:32:17.559 Unknown (0Ch): Supported 00:32:17.559 Unknown (12h): Supported 00:32:17.559 Copy (19h): Supported LBA-Change 00:32:17.559 Unknown (1Dh): Supported LBA-Change 00:32:17.559 00:32:17.559 Error Log 00:32:17.559 ========= 00:32:17.559 00:32:17.559 Arbitration 00:32:17.559 =========== 00:32:17.559 Arbitration Burst: no limit 00:32:17.559 00:32:17.559 Power Management 00:32:17.559 ================ 00:32:17.559 Number of Power States: 1 00:32:17.559 Current Power State: Power State #0 00:32:17.559 Power State #0: 00:32:17.559 Max Power: 25.00 W 00:32:17.559 Non-Operational State: Operational 00:32:17.559 Entry Latency: 16 microseconds 00:32:17.559 Exit Latency: 4 microseconds 00:32:17.559 Relative Read Throughput: 0 00:32:17.559 Relative Read Latency: 0 00:32:17.559 Relative Write Throughput: 0 00:32:17.559 Relative Write Latency: 0 00:32:17.559 Idle Power: Not Reported 00:32:17.560 Active Power: Not Reported 00:32:17.560 Non-Operational Permissive Mode: Not Supported 00:32:17.560 00:32:17.560 Health Information 00:32:17.560 ================== 00:32:17.560 Critical Warnings: 00:32:17.560 Available Spare Space: OK 00:32:17.560 Temperature: OK 00:32:17.560 Device Reliability: OK 00:32:17.560 Read Only: No 00:32:17.560 Volatile Memory Backup: OK 00:32:17.560 Current Temperature: 323 Kelvin (50 Celsius) 00:32:17.560 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:17.560 Available Spare: 0% 00:32:17.560 Available Spare Threshold: 0% 00:32:17.560 Life Percentage Used: 0% 00:32:17.560 Data Units Read: 4270 00:32:17.560 Data Units Written: 3942 00:32:17.560 Host Read Commands: 233055 00:32:17.560 Host Write Commands: 246220 00:32:17.560 Controller Busy Time: 0 minutes 00:32:17.560 Power Cycles: 0 00:32:17.560 Power On Hours: 0 hours 00:32:17.560 Unsafe Shutdowns: 0 00:32:17.560 Unrecoverable Media Errors: 0 00:32:17.560 Lifetime Error Log Entries: 0 00:32:17.560 Warning Temperature Time: 0 minutes 00:32:17.560 Critical Temperature Time: 0 minutes 00:32:17.560 00:32:17.560 Number of Queues 00:32:17.560 ================ 00:32:17.560 Number of I/O Submission Queues: 64 00:32:17.560 Number of I/O Completion Queues: 64 00:32:17.560 00:32:17.560 ZNS Specific Controller Data 00:32:17.560 ============================ 00:32:17.560 Zone Append Size Limit: 0 00:32:17.560 00:32:17.560 00:32:17.560 Active Namespaces 00:32:17.560 ================= 00:32:17.560 Namespace ID:1 00:32:17.560 Error Recovery Timeout: Unlimited 00:32:17.560 Command Set Identifier: NVM (00h) 00:32:17.560 Deallocate: Supported 00:32:17.560 Deallocated/Unwritten Error: Supported 00:32:17.560 Deallocated Read Value: All 0x00 00:32:17.560 Deallocate in Write Zeroes: Not Supported 00:32:17.560 Deallocated Guard Field: 0xFFFF 00:32:17.560 Flush: Supported 00:32:17.560 Reservation: Not Supported 00:32:17.560 Namespace Sharing Capabilities: Private 00:32:17.560 Size (in LBAs): 1310720 (5GiB) 00:32:17.560 Capacity (in LBAs): 1310720 (5GiB) 00:32:17.560 Utilization (in LBAs): 1310720 (5GiB) 00:32:17.560 Thin Provisioning: Not Supported 00:32:17.560 Per-NS Atomic Units: No 00:32:17.560 Maximum Single Source Range Length: 128 00:32:17.560 Maximum Copy Length: 128 00:32:17.560 Maximum Source Range Count: 128 00:32:17.560 NGUID/EUI64 Never Reused: No 00:32:17.560 Namespace Write Protected: No 00:32:17.560 Number of LBA Formats: 8 00:32:17.560 Current LBA Format: LBA Format #04 00:32:17.560 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:17.560 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:17.560 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:32:17.560 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:17.560 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:17.560 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:17.560 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:17.560 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:17.560 00:32:17.560 00:51:51 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:32:17.560 00:51:51 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:32:17.818 ===================================================== 00:32:17.819 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:17.819 ===================================================== 00:32:17.819 Controller Capabilities/Features 00:32:17.819 ================================ 00:32:17.819 Vendor ID: 1b36 00:32:17.819 Subsystem Vendor ID: 1af4 00:32:17.819 Serial Number: 12340 00:32:17.819 Model Number: QEMU NVMe Ctrl 00:32:17.819 Firmware Version: 8.0.0 00:32:17.819 Recommended Arb Burst: 6 00:32:17.819 IEEE OUI Identifier: 00 54 52 00:32:17.819 Multi-path I/O 00:32:17.819 May have multiple subsystem ports: No 00:32:17.819 May have multiple controllers: No 00:32:17.819 Associated with SR-IOV VF: No 00:32:17.819 Max Data Transfer Size: 524288 00:32:17.819 Max Number of Namespaces: 256 00:32:17.819 Max Number of I/O Queues: 64 00:32:17.819 NVMe Specification Version (VS): 1.4 00:32:17.819 NVMe Specification Version (Identify): 1.4 00:32:17.819 Maximum Queue Entries: 2048 00:32:17.819 Contiguous Queues Required: Yes 00:32:17.819 Arbitration Mechanisms Supported 00:32:17.819 Weighted Round Robin: Not Supported 00:32:17.819 Vendor Specific: Not Supported 00:32:17.819 Reset Timeout: 7500 ms 00:32:17.819 Doorbell Stride: 4 bytes 00:32:17.819 NVM Subsystem Reset: Not Supported 00:32:17.819 Command Sets Supported 00:32:17.819 NVM Command Set: Supported 00:32:17.819 Boot Partition: Not Supported 00:32:17.819 Memory Page Size Minimum: 4096 bytes 00:32:17.819 Memory Page Size Maximum: 65536 bytes 00:32:17.819 Persistent Memory Region: Not Supported 00:32:17.819 Optional Asynchronous Events Supported 00:32:17.819 Namespace Attribute Notices: Supported 00:32:17.819 Firmware Activation Notices: Not Supported 00:32:17.819 ANA Change Notices: Not Supported 00:32:17.819 PLE Aggregate Log Change Notices: Not Supported 00:32:17.819 LBA Status Info Alert Notices: Not Supported 00:32:17.819 EGE Aggregate Log Change Notices: Not Supported 00:32:17.819 Normal NVM Subsystem Shutdown event: Not Supported 00:32:17.819 Zone Descriptor Change Notices: Not Supported 00:32:17.819 Discovery Log Change Notices: Not Supported 00:32:17.819 Controller Attributes 00:32:17.819 128-bit Host Identifier: Not Supported 00:32:17.819 Non-Operational Permissive Mode: Not Supported 00:32:17.819 NVM Sets: Not Supported 00:32:17.819 Read Recovery Levels: Not Supported 00:32:17.819 Endurance Groups: Not Supported 00:32:17.819 Predictable Latency Mode: Not Supported 00:32:17.819 Traffic Based Keep ALive: Not Supported 00:32:17.819 Namespace Granularity: Not Supported 00:32:17.819 SQ Associations: Not Supported 00:32:17.819 UUID List: Not Supported 00:32:17.819 Multi-Domain Subsystem: Not Supported 00:32:17.819 Fixed Capacity Management: Not Supported 00:32:17.819 Variable Capacity Management: Not Supported 00:32:17.819 Delete Endurance Group: Not Supported 00:32:17.819 Delete NVM Set: Not Supported 00:32:17.819 Extended LBA Formats Supported: Supported 
00:32:17.819 Flexible Data Placement Supported: Not Supported 00:32:17.819 00:32:17.819 Controller Memory Buffer Support 00:32:17.819 ================================ 00:32:17.819 Supported: No 00:32:17.819 00:32:17.819 Persistent Memory Region Support 00:32:17.819 ================================ 00:32:17.819 Supported: No 00:32:17.819 00:32:17.819 Admin Command Set Attributes 00:32:17.819 ============================ 00:32:17.819 Security Send/Receive: Not Supported 00:32:17.819 Format NVM: Supported 00:32:17.819 Firmware Activate/Download: Not Supported 00:32:17.819 Namespace Management: Supported 00:32:17.819 Device Self-Test: Not Supported 00:32:17.819 Directives: Supported 00:32:17.819 NVMe-MI: Not Supported 00:32:17.819 Virtualization Management: Not Supported 00:32:17.819 Doorbell Buffer Config: Supported 00:32:17.819 Get LBA Status Capability: Not Supported 00:32:17.819 Command & Feature Lockdown Capability: Not Supported 00:32:17.819 Abort Command Limit: 4 00:32:17.819 Async Event Request Limit: 4 00:32:17.819 Number of Firmware Slots: N/A 00:32:17.819 Firmware Slot 1 Read-Only: N/A 00:32:17.819 Firmware Activation Without Reset: N/A 00:32:17.819 Multiple Update Detection Support: N/A 00:32:17.819 Firmware Update Granularity: No Information Provided 00:32:17.819 Per-Namespace SMART Log: Yes 00:32:17.819 Asymmetric Namespace Access Log Page: Not Supported 00:32:17.819 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:32:17.819 Command Effects Log Page: Supported 00:32:17.819 Get Log Page Extended Data: Supported 00:32:17.819 Telemetry Log Pages: Not Supported 00:32:17.819 Persistent Event Log Pages: Not Supported 00:32:17.819 Supported Log Pages Log Page: May Support 00:32:17.819 Commands Supported & Effects Log Page: Not Supported 00:32:17.819 Feature Identifiers & Effects Log Page:May Support 00:32:17.819 NVMe-MI Commands & Effects Log Page: May Support 00:32:17.819 Data Area 4 for Telemetry Log: Not Supported 00:32:17.819 Error Log Page Entries Supported: 1 00:32:17.819 Keep Alive: Not Supported 00:32:17.819 00:32:17.819 NVM Command Set Attributes 00:32:17.819 ========================== 00:32:17.819 Submission Queue Entry Size 00:32:17.819 Max: 64 00:32:17.819 Min: 64 00:32:17.819 Completion Queue Entry Size 00:32:17.819 Max: 16 00:32:17.819 Min: 16 00:32:17.819 Number of Namespaces: 256 00:32:17.819 Compare Command: Supported 00:32:17.819 Write Uncorrectable Command: Not Supported 00:32:17.819 Dataset Management Command: Supported 00:32:17.819 Write Zeroes Command: Supported 00:32:17.819 Set Features Save Field: Supported 00:32:17.819 Reservations: Not Supported 00:32:17.819 Timestamp: Supported 00:32:17.819 Copy: Supported 00:32:17.819 Volatile Write Cache: Present 00:32:17.819 Atomic Write Unit (Normal): 1 00:32:17.819 Atomic Write Unit (PFail): 1 00:32:17.819 Atomic Compare & Write Unit: 1 00:32:17.819 Fused Compare & Write: Not Supported 00:32:17.819 Scatter-Gather List 00:32:17.819 SGL Command Set: Supported 00:32:17.819 SGL Keyed: Not Supported 00:32:17.819 SGL Bit Bucket Descriptor: Not Supported 00:32:17.819 SGL Metadata Pointer: Not Supported 00:32:17.819 Oversized SGL: Not Supported 00:32:17.819 SGL Metadata Address: Not Supported 00:32:17.819 SGL Offset: Not Supported 00:32:17.819 Transport SGL Data Block: Not Supported 00:32:17.819 Replay Protected Memory Block: Not Supported 00:32:17.819 00:32:17.819 Firmware Slot Information 00:32:17.819 ========================= 00:32:17.819 Active slot: 1 00:32:17.819 Slot 1 Firmware Revision: 1.0 00:32:17.819 00:32:17.819 
00:32:17.819 Commands Supported and Effects 00:32:17.819 ============================== 00:32:17.819 Admin Commands 00:32:17.819 -------------- 00:32:17.819 Delete I/O Submission Queue (00h): Supported 00:32:17.819 Create I/O Submission Queue (01h): Supported 00:32:17.819 Get Log Page (02h): Supported 00:32:17.819 Delete I/O Completion Queue (04h): Supported 00:32:17.819 Create I/O Completion Queue (05h): Supported 00:32:17.819 Identify (06h): Supported 00:32:17.819 Abort (08h): Supported 00:32:17.819 Set Features (09h): Supported 00:32:17.819 Get Features (0Ah): Supported 00:32:17.819 Asynchronous Event Request (0Ch): Supported 00:32:17.819 Namespace Attachment (15h): Supported NS-Inventory-Change 00:32:17.819 Directive Send (19h): Supported 00:32:17.819 Directive Receive (1Ah): Supported 00:32:17.819 Virtualization Management (1Ch): Supported 00:32:17.819 Doorbell Buffer Config (7Ch): Supported 00:32:17.819 Format NVM (80h): Supported LBA-Change 00:32:17.819 I/O Commands 00:32:17.819 ------------ 00:32:17.819 Flush (00h): Supported LBA-Change 00:32:17.819 Write (01h): Supported LBA-Change 00:32:17.819 Read (02h): Supported 00:32:17.819 Compare (05h): Supported 00:32:17.819 Write Zeroes (08h): Supported LBA-Change 00:32:17.819 Dataset Management (09h): Supported LBA-Change 00:32:17.819 Unknown (0Ch): Supported 00:32:17.819 Unknown (12h): Supported 00:32:17.819 Copy (19h): Supported LBA-Change 00:32:17.819 Unknown (1Dh): Supported LBA-Change 00:32:17.819 00:32:17.819 Error Log 00:32:17.819 ========= 00:32:17.819 00:32:17.819 Arbitration 00:32:17.819 =========== 00:32:17.819 Arbitration Burst: no limit 00:32:17.819 00:32:17.819 Power Management 00:32:17.819 ================ 00:32:17.819 Number of Power States: 1 00:32:17.819 Current Power State: Power State #0 00:32:17.819 Power State #0: 00:32:17.819 Max Power: 25.00 W 00:32:17.819 Non-Operational State: Operational 00:32:17.819 Entry Latency: 16 microseconds 00:32:17.819 Exit Latency: 4 microseconds 00:32:17.819 Relative Read Throughput: 0 00:32:17.819 Relative Read Latency: 0 00:32:17.819 Relative Write Throughput: 0 00:32:17.819 Relative Write Latency: 0 00:32:18.079 Idle Power: Not Reported 00:32:18.079 Active Power: Not Reported 00:32:18.079 Non-Operational Permissive Mode: Not Supported 00:32:18.079 00:32:18.079 Health Information 00:32:18.079 ================== 00:32:18.079 Critical Warnings: 00:32:18.079 Available Spare Space: OK 00:32:18.079 Temperature: OK 00:32:18.079 Device Reliability: OK 00:32:18.079 Read Only: No 00:32:18.079 Volatile Memory Backup: OK 00:32:18.079 Current Temperature: 323 Kelvin (50 Celsius) 00:32:18.079 Temperature Threshold: 343 Kelvin (70 Celsius) 00:32:18.079 Available Spare: 0% 00:32:18.079 Available Spare Threshold: 0% 00:32:18.079 Life Percentage Used: 0% 00:32:18.079 Data Units Read: 4270 00:32:18.079 Data Units Written: 3942 00:32:18.079 Host Read Commands: 233055 00:32:18.079 Host Write Commands: 246220 00:32:18.079 Controller Busy Time: 0 minutes 00:32:18.079 Power Cycles: 0 00:32:18.079 Power On Hours: 0 hours 00:32:18.079 Unsafe Shutdowns: 0 00:32:18.079 Unrecoverable Media Errors: 0 00:32:18.079 Lifetime Error Log Entries: 0 00:32:18.079 Warning Temperature Time: 0 minutes 00:32:18.079 Critical Temperature Time: 0 minutes 00:32:18.079 00:32:18.079 Number of Queues 00:32:18.079 ================ 00:32:18.079 Number of I/O Submission Queues: 64 00:32:18.079 Number of I/O Completion Queues: 64 00:32:18.079 00:32:18.079 ZNS Specific Controller Data 00:32:18.079 ============================ 
00:32:18.079 Zone Append Size Limit: 0 00:32:18.079 00:32:18.079 00:32:18.079 Active Namespaces 00:32:18.079 ================= 00:32:18.079 Namespace ID:1 00:32:18.079 Error Recovery Timeout: Unlimited 00:32:18.079 Command Set Identifier: NVM (00h) 00:32:18.079 Deallocate: Supported 00:32:18.079 Deallocated/Unwritten Error: Supported 00:32:18.079 Deallocated Read Value: All 0x00 00:32:18.079 Deallocate in Write Zeroes: Not Supported 00:32:18.079 Deallocated Guard Field: 0xFFFF 00:32:18.079 Flush: Supported 00:32:18.079 Reservation: Not Supported 00:32:18.079 Namespace Sharing Capabilities: Private 00:32:18.079 Size (in LBAs): 1310720 (5GiB) 00:32:18.079 Capacity (in LBAs): 1310720 (5GiB) 00:32:18.079 Utilization (in LBAs): 1310720 (5GiB) 00:32:18.079 Thin Provisioning: Not Supported 00:32:18.079 Per-NS Atomic Units: No 00:32:18.079 Maximum Single Source Range Length: 128 00:32:18.079 Maximum Copy Length: 128 00:32:18.079 Maximum Source Range Count: 128 00:32:18.079 NGUID/EUI64 Never Reused: No 00:32:18.079 Namespace Write Protected: No 00:32:18.079 Number of LBA Formats: 8 00:32:18.079 Current LBA Format: LBA Format #04 00:32:18.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:18.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:32:18.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:32:18.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:32:18.079 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:32:18.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:32:18.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:32:18.079 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:32:18.079 00:32:18.079 00:32:18.079 real 0m0.694s 00:32:18.079 user 0m0.341s 00:32:18.079 sys 0m0.263s 00:32:18.079 00:51:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:18.079 00:51:51 -- common/autotest_common.sh@10 -- # set +x 00:32:18.079 ************************************ 00:32:18.079 END TEST nvme_identify 00:32:18.079 ************************************ 00:32:18.079 00:51:51 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:32:18.079 00:51:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:18.079 00:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:18.079 00:51:51 -- common/autotest_common.sh@10 -- # set +x 00:32:18.079 ************************************ 00:32:18.079 START TEST nvme_perf 00:32:18.079 ************************************ 00:32:18.079 00:51:51 -- common/autotest_common.sh@1111 -- # nvme_perf 00:32:18.079 00:51:51 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:32:19.462 Initializing NVMe Controllers 00:32:19.462 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:19.462 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:19.462 Initialization complete. Launching workers. 
00:32:19.462 ======================================================== 00:32:19.462 Latency(us) 00:32:19.462 Device Information : IOPS MiB/s Average min max 00:32:19.462 PCIE (0000:00:10.0) NSID 1 from core 0: 87416.31 1024.41 1463.03 626.14 6707.64 00:32:19.462 ======================================================== 00:32:19.462 Total : 87416.31 1024.41 1463.03 626.14 6707.64 00:32:19.462 00:32:19.462 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:32:19.462 ================================================================================= 00:32:19.462 1.00000% : 770.793us 00:32:19.462 10.00000% : 919.738us 00:32:19.462 25.00000% : 1109.644us 00:32:19.462 50.00000% : 1400.087us 00:32:19.462 75.00000% : 1683.084us 00:32:19.462 90.00000% : 1980.975us 00:32:19.462 95.00000% : 2532.073us 00:32:19.462 98.00000% : 3083.171us 00:32:19.462 99.00000% : 3291.695us 00:32:19.462 99.50000% : 3410.851us 00:32:19.462 99.90000% : 3991.738us 00:32:19.462 99.99000% : 6404.655us 00:32:19.462 99.99900% : 6732.335us 00:32:19.462 99.99990% : 6732.335us 00:32:19.462 99.99999% : 6732.335us 00:32:19.462 00:32:19.462 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:32:19.462 ============================================================================== 00:32:19.462 Range in us Cumulative IO count 00:32:19.462 625.571 - 629.295: 0.0011% ( 1) 00:32:19.462 629.295 - 633.018: 0.0023% ( 1) 00:32:19.462 636.742 - 640.465: 0.0034% ( 1) 00:32:19.462 651.636 - 655.360: 0.0046% ( 1) 00:32:19.462 655.360 - 659.084: 0.0057% ( 1) 00:32:19.462 659.084 - 662.807: 0.0080% ( 2) 00:32:19.462 666.531 - 670.255: 0.0103% ( 2) 00:32:19.462 670.255 - 673.978: 0.0114% ( 1) 00:32:19.462 673.978 - 677.702: 0.0126% ( 1) 00:32:19.462 677.702 - 681.425: 0.0149% ( 2) 00:32:19.462 681.425 - 685.149: 0.0183% ( 3) 00:32:19.462 685.149 - 688.873: 0.0229% ( 4) 00:32:19.462 688.873 - 692.596: 0.0309% ( 7) 00:32:19.462 692.596 - 696.320: 0.0366% ( 5) 00:32:19.462 696.320 - 700.044: 0.0480% ( 10) 00:32:19.462 700.044 - 703.767: 0.0583% ( 9) 00:32:19.462 703.767 - 707.491: 0.0744% ( 14) 00:32:19.462 707.491 - 711.215: 0.0835% ( 8) 00:32:19.462 711.215 - 714.938: 0.1064% ( 20) 00:32:19.462 714.938 - 718.662: 0.1235% ( 15) 00:32:19.462 718.662 - 722.385: 0.1487% ( 22) 00:32:19.462 722.385 - 726.109: 0.1842% ( 31) 00:32:19.462 726.109 - 729.833: 0.2059% ( 19) 00:32:19.462 729.833 - 733.556: 0.2539% ( 42) 00:32:19.462 733.556 - 737.280: 0.2951% ( 36) 00:32:19.462 737.280 - 741.004: 0.3477% ( 46) 00:32:19.462 741.004 - 744.727: 0.4049% ( 50) 00:32:19.462 744.727 - 748.451: 0.4736% ( 60) 00:32:19.462 748.451 - 752.175: 0.5628% ( 78) 00:32:19.462 752.175 - 755.898: 0.6474% ( 74) 00:32:19.462 755.898 - 759.622: 0.7469% ( 87) 00:32:19.462 759.622 - 763.345: 0.8407% ( 82) 00:32:19.462 763.345 - 767.069: 0.9517% ( 97) 00:32:19.462 767.069 - 770.793: 1.0798% ( 112) 00:32:19.462 770.793 - 774.516: 1.2056% ( 110) 00:32:19.462 774.516 - 778.240: 1.3429% ( 120) 00:32:19.462 778.240 - 781.964: 1.4607% ( 103) 00:32:19.462 781.964 - 785.687: 1.6025% ( 124) 00:32:19.462 785.687 - 789.411: 1.7684% ( 145) 00:32:19.462 789.411 - 793.135: 1.9411% ( 151) 00:32:19.462 793.135 - 796.858: 2.1344% ( 169) 00:32:19.462 796.858 - 800.582: 2.3186% ( 161) 00:32:19.462 800.582 - 804.305: 2.5027% ( 161) 00:32:19.462 804.305 - 808.029: 2.7041% ( 176) 00:32:19.462 808.029 - 811.753: 2.8802% ( 154) 00:32:19.462 811.753 - 815.476: 3.0907% ( 184) 00:32:19.462 815.476 - 819.200: 3.2977% ( 181) 00:32:19.462 819.200 - 822.924: 3.5322% ( 205) 00:32:19.462 
822.924 - 826.647: 3.7575% ( 197) 00:32:19.462 826.647 - 830.371: 3.9852% ( 199) 00:32:19.462 830.371 - 834.095: 4.2185% ( 204) 00:32:19.462 834.095 - 837.818: 4.4690% ( 219) 00:32:19.462 837.818 - 841.542: 4.7138% ( 214) 00:32:19.462 841.542 - 845.265: 4.9449% ( 202) 00:32:19.462 845.265 - 848.989: 5.1839% ( 209) 00:32:19.462 848.989 - 852.713: 5.4527% ( 235) 00:32:19.462 852.713 - 856.436: 5.7044% ( 220) 00:32:19.462 856.436 - 860.160: 5.9732% ( 235) 00:32:19.462 860.160 - 863.884: 6.2020% ( 200) 00:32:19.462 863.884 - 867.607: 6.4673% ( 232) 00:32:19.462 867.607 - 871.331: 6.7258% ( 226) 00:32:19.462 871.331 - 875.055: 6.9741% ( 217) 00:32:19.462 875.055 - 878.778: 7.2371% ( 230) 00:32:19.462 878.778 - 882.502: 7.4934% ( 224) 00:32:19.462 882.502 - 886.225: 7.7553% ( 229) 00:32:19.462 886.225 - 889.949: 8.0058% ( 219) 00:32:19.462 889.949 - 893.673: 8.3032% ( 260) 00:32:19.462 893.673 - 897.396: 8.5514% ( 217) 00:32:19.462 897.396 - 901.120: 8.7962% ( 214) 00:32:19.462 901.120 - 904.844: 9.0524% ( 224) 00:32:19.462 904.844 - 908.567: 9.3258% ( 239) 00:32:19.462 908.567 - 912.291: 9.5958% ( 236) 00:32:19.462 912.291 - 916.015: 9.8463% ( 219) 00:32:19.462 916.015 - 919.738: 10.1048% ( 226) 00:32:19.462 919.738 - 923.462: 10.3633% ( 226) 00:32:19.462 923.462 - 927.185: 10.6550% ( 255) 00:32:19.462 927.185 - 930.909: 10.9169% ( 229) 00:32:19.462 930.909 - 934.633: 11.1869% ( 236) 00:32:19.462 934.633 - 938.356: 11.4717% ( 249) 00:32:19.462 938.356 - 942.080: 11.7714% ( 262) 00:32:19.462 942.080 - 945.804: 12.0539% ( 247) 00:32:19.462 945.804 - 949.527: 12.3273% ( 239) 00:32:19.462 949.527 - 953.251: 12.6087% ( 246) 00:32:19.462 953.251 - 960.698: 13.1886% ( 507) 00:32:19.462 960.698 - 968.145: 13.7662% ( 505) 00:32:19.462 968.145 - 975.593: 14.3691% ( 527) 00:32:19.462 975.593 - 983.040: 14.9524% ( 510) 00:32:19.462 983.040 - 990.487: 15.5678% ( 538) 00:32:19.462 990.487 - 997.935: 16.1580% ( 516) 00:32:19.462 997.935 - 1005.382: 16.7700% ( 535) 00:32:19.462 1005.382 - 1012.829: 17.3648% ( 520) 00:32:19.462 1012.829 - 1020.276: 18.0099% ( 564) 00:32:19.462 1020.276 - 1027.724: 18.6116% ( 526) 00:32:19.462 1027.724 - 1035.171: 19.2338% ( 544) 00:32:19.462 1035.171 - 1042.618: 19.8618% ( 549) 00:32:19.462 1042.618 - 1050.065: 20.4761% ( 537) 00:32:19.462 1050.065 - 1057.513: 21.1292% ( 571) 00:32:19.462 1057.513 - 1064.960: 21.7263% ( 522) 00:32:19.462 1064.960 - 1072.407: 22.3840% ( 575) 00:32:19.462 1072.407 - 1079.855: 22.9868% ( 527) 00:32:19.462 1079.855 - 1087.302: 23.6171% ( 551) 00:32:19.462 1087.302 - 1094.749: 24.2691% ( 570) 00:32:19.462 1094.749 - 1102.196: 24.8559% ( 513) 00:32:19.463 1102.196 - 1109.644: 25.5273% ( 587) 00:32:19.463 1109.644 - 1117.091: 26.1370% ( 533) 00:32:19.463 1117.091 - 1124.538: 26.7878% ( 569) 00:32:19.463 1124.538 - 1131.985: 27.4170% ( 550) 00:32:19.463 1131.985 - 1139.433: 28.0506% ( 554) 00:32:19.463 1139.433 - 1146.880: 28.6580% ( 531) 00:32:19.463 1146.880 - 1154.327: 29.2757% ( 540) 00:32:19.463 1154.327 - 1161.775: 29.9128% ( 557) 00:32:19.463 1161.775 - 1169.222: 30.5122% ( 524) 00:32:19.463 1169.222 - 1176.669: 31.1631% ( 569) 00:32:19.463 1176.669 - 1184.116: 31.7716% ( 532) 00:32:19.463 1184.116 - 1191.564: 32.3961% ( 546) 00:32:19.463 1191.564 - 1199.011: 33.0161% ( 542) 00:32:19.463 1199.011 - 1206.458: 33.6578% ( 561) 00:32:19.463 1206.458 - 1213.905: 34.2915% ( 554) 00:32:19.463 1213.905 - 1221.353: 34.9343% ( 562) 00:32:19.463 1221.353 - 1228.800: 35.5520% ( 540) 00:32:19.463 1228.800 - 1236.247: 36.1972% ( 564) 00:32:19.463 1236.247 - 
1243.695: 36.8251% ( 549) 00:32:19.463 1243.695 - 1251.142: 37.4737% ( 567) 00:32:19.463 1251.142 - 1258.589: 38.1040% ( 551) 00:32:19.463 1258.589 - 1266.036: 38.7468% ( 562) 00:32:19.463 1266.036 - 1273.484: 39.3656% ( 541) 00:32:19.463 1273.484 - 1280.931: 40.0062% ( 560) 00:32:19.463 1280.931 - 1288.378: 40.6582% ( 570) 00:32:19.463 1288.378 - 1295.825: 41.2816% ( 545) 00:32:19.463 1295.825 - 1303.273: 41.9667% ( 599) 00:32:19.463 1303.273 - 1310.720: 42.5833% ( 539) 00:32:19.463 1310.720 - 1318.167: 43.2433% ( 577) 00:32:19.463 1318.167 - 1325.615: 43.8587% ( 538) 00:32:19.463 1325.615 - 1333.062: 44.5187% ( 577) 00:32:19.463 1333.062 - 1340.509: 45.1730% ( 572) 00:32:19.463 1340.509 - 1347.956: 45.8238% ( 569) 00:32:19.463 1347.956 - 1355.404: 46.4861% ( 579) 00:32:19.463 1355.404 - 1362.851: 47.1427% ( 574) 00:32:19.463 1362.851 - 1370.298: 47.7981% ( 573) 00:32:19.463 1370.298 - 1377.745: 48.4581% ( 577) 00:32:19.463 1377.745 - 1385.193: 49.1352% ( 592) 00:32:19.463 1385.193 - 1392.640: 49.7792% ( 563) 00:32:19.463 1392.640 - 1400.087: 50.4507% ( 587) 00:32:19.463 1400.087 - 1407.535: 51.0912% ( 560) 00:32:19.463 1407.535 - 1414.982: 51.7718% ( 595) 00:32:19.463 1414.982 - 1422.429: 52.3952% ( 545) 00:32:19.463 1422.429 - 1429.876: 53.0735% ( 593) 00:32:19.463 1429.876 - 1437.324: 53.7209% ( 566) 00:32:19.463 1437.324 - 1444.771: 54.3924% ( 587) 00:32:19.463 1444.771 - 1452.218: 55.0318% ( 559) 00:32:19.463 1452.218 - 1459.665: 55.7021% ( 586) 00:32:19.463 1459.665 - 1467.113: 56.3484% ( 565) 00:32:19.463 1467.113 - 1474.560: 57.0072% ( 576) 00:32:19.463 1474.560 - 1482.007: 57.6741% ( 583) 00:32:19.463 1482.007 - 1489.455: 58.3204% ( 565) 00:32:19.463 1489.455 - 1496.902: 58.9987% ( 593) 00:32:19.463 1496.902 - 1504.349: 59.6461% ( 566) 00:32:19.463 1504.349 - 1511.796: 60.2981% ( 570) 00:32:19.463 1511.796 - 1519.244: 60.9604% ( 579) 00:32:19.463 1519.244 - 1526.691: 61.6169% ( 574) 00:32:19.463 1526.691 - 1534.138: 62.2724% ( 573) 00:32:19.463 1534.138 - 1541.585: 62.9381% ( 582) 00:32:19.463 1541.585 - 1549.033: 63.5741% ( 556) 00:32:19.463 1549.033 - 1556.480: 64.2581% ( 598) 00:32:19.463 1556.480 - 1563.927: 64.8884% ( 551) 00:32:19.463 1563.927 - 1571.375: 65.5598% ( 587) 00:32:19.463 1571.375 - 1578.822: 66.1809% ( 543) 00:32:19.463 1578.822 - 1586.269: 66.8524% ( 587) 00:32:19.463 1586.269 - 1593.716: 67.4769% ( 546) 00:32:19.463 1593.716 - 1601.164: 68.1449% ( 584) 00:32:19.463 1601.164 - 1608.611: 68.7877% ( 562) 00:32:19.463 1608.611 - 1616.058: 69.4329% ( 564) 00:32:19.463 1616.058 - 1623.505: 70.0803% ( 566) 00:32:19.463 1623.505 - 1630.953: 70.7403% ( 577) 00:32:19.463 1630.953 - 1638.400: 71.3626% ( 544) 00:32:19.463 1638.400 - 1645.847: 72.0237% ( 578) 00:32:19.463 1645.847 - 1653.295: 72.6551% ( 552) 00:32:19.463 1653.295 - 1660.742: 73.3311% ( 591) 00:32:19.463 1660.742 - 1668.189: 73.9591% ( 549) 00:32:19.463 1668.189 - 1675.636: 74.6008% ( 561) 00:32:19.463 1675.636 - 1683.084: 75.2448% ( 563) 00:32:19.463 1683.084 - 1690.531: 75.8750% ( 551) 00:32:19.463 1690.531 - 1697.978: 76.5167% ( 561) 00:32:19.463 1697.978 - 1705.425: 77.1310% ( 537) 00:32:19.463 1705.425 - 1712.873: 77.7464% ( 538) 00:32:19.463 1712.873 - 1720.320: 78.3686% ( 544) 00:32:19.463 1720.320 - 1727.767: 79.0115% ( 562) 00:32:19.463 1727.767 - 1735.215: 79.6051% ( 519) 00:32:19.463 1735.215 - 1742.662: 80.2205% ( 538) 00:32:19.463 1742.662 - 1750.109: 80.8245% ( 528) 00:32:19.463 1750.109 - 1757.556: 81.4250% ( 525) 00:32:19.463 1757.556 - 1765.004: 82.0152% ( 516) 00:32:19.463 1765.004 - 
1772.451: 82.5837% ( 497) 00:32:19.463 1772.451 - 1779.898: 83.1339% ( 481) 00:32:19.463 1779.898 - 1787.345: 83.6624% ( 462) 00:32:19.463 1787.345 - 1794.793: 84.1737% ( 447) 00:32:19.463 1794.793 - 1802.240: 84.6347% ( 403) 00:32:19.463 1802.240 - 1809.687: 85.0888% ( 397) 00:32:19.463 1809.687 - 1817.135: 85.4742% ( 337) 00:32:19.463 1817.135 - 1824.582: 85.8712% ( 347) 00:32:19.463 1824.582 - 1832.029: 86.2406% ( 323) 00:32:19.463 1832.029 - 1839.476: 86.5563% ( 276) 00:32:19.463 1839.476 - 1846.924: 86.8629% ( 268) 00:32:19.463 1846.924 - 1854.371: 87.1157% ( 221) 00:32:19.463 1854.371 - 1861.818: 87.3856% ( 236) 00:32:19.463 1861.818 - 1869.265: 87.6132% ( 199) 00:32:19.463 1869.265 - 1876.713: 87.8420% ( 200) 00:32:19.463 1876.713 - 1884.160: 88.0296% ( 164) 00:32:19.463 1884.160 - 1891.607: 88.2298% ( 175) 00:32:19.463 1891.607 - 1899.055: 88.3933% ( 143) 00:32:19.463 1899.055 - 1906.502: 88.5809% ( 164) 00:32:19.463 1906.502 - 1921.396: 88.8932% ( 273) 00:32:19.463 1921.396 - 1936.291: 89.1940% ( 263) 00:32:19.463 1936.291 - 1951.185: 89.4823% ( 252) 00:32:19.463 1951.185 - 1966.080: 89.7511% ( 235) 00:32:19.463 1966.080 - 1980.975: 90.0039% ( 221) 00:32:19.463 1980.975 - 1995.869: 90.2349% ( 202) 00:32:19.463 1995.869 - 2010.764: 90.4683% ( 204) 00:32:19.463 2010.764 - 2025.658: 90.6868% ( 191) 00:32:19.463 2025.658 - 2040.553: 90.8812% ( 170) 00:32:19.463 2040.553 - 2055.447: 91.0677% ( 163) 00:32:19.463 2055.447 - 2070.342: 91.2461% ( 156) 00:32:19.463 2070.342 - 2085.236: 91.4131% ( 146) 00:32:19.463 2085.236 - 2100.131: 91.5710% ( 138) 00:32:19.463 2100.131 - 2115.025: 91.7220% ( 132) 00:32:19.463 2115.025 - 2129.920: 91.8615% ( 122) 00:32:19.463 2129.920 - 2144.815: 92.0091% ( 129) 00:32:19.463 2144.815 - 2159.709: 92.1543% ( 127) 00:32:19.463 2159.709 - 2174.604: 92.2893% ( 118) 00:32:19.463 2174.604 - 2189.498: 92.4289% ( 122) 00:32:19.463 2189.498 - 2204.393: 92.5478% ( 104) 00:32:19.463 2204.393 - 2219.287: 92.6782% ( 114) 00:32:19.463 2219.287 - 2234.182: 92.8029% ( 109) 00:32:19.463 2234.182 - 2249.076: 92.9230% ( 105) 00:32:19.463 2249.076 - 2263.971: 93.0385% ( 101) 00:32:19.463 2263.971 - 2278.865: 93.1609% ( 107) 00:32:19.463 2278.865 - 2293.760: 93.2856% ( 109) 00:32:19.463 2293.760 - 2308.655: 93.3885% ( 90) 00:32:19.463 2308.655 - 2323.549: 93.5052% ( 102) 00:32:19.463 2323.549 - 2338.444: 93.6185% ( 99) 00:32:19.463 2338.444 - 2353.338: 93.7317% ( 99) 00:32:19.463 2353.338 - 2368.233: 93.8415% ( 96) 00:32:19.463 2368.233 - 2383.127: 93.9467% ( 92) 00:32:19.463 2383.127 - 2398.022: 94.0554% ( 95) 00:32:19.463 2398.022 - 2412.916: 94.1664% ( 97) 00:32:19.463 2412.916 - 2427.811: 94.2727% ( 93) 00:32:19.463 2427.811 - 2442.705: 94.3734% ( 88) 00:32:19.463 2442.705 - 2457.600: 94.4752% ( 89) 00:32:19.463 2457.600 - 2472.495: 94.5816% ( 93) 00:32:19.463 2472.495 - 2487.389: 94.6902% ( 95) 00:32:19.463 2487.389 - 2502.284: 94.8012% ( 97) 00:32:19.463 2502.284 - 2517.178: 94.9087% ( 94) 00:32:19.463 2517.178 - 2532.073: 95.0162% ( 94) 00:32:19.463 2532.073 - 2546.967: 95.1226% ( 93) 00:32:19.463 2546.967 - 2561.862: 95.2290% ( 93) 00:32:19.463 2561.862 - 2576.756: 95.3388% ( 96) 00:32:19.463 2576.756 - 2591.651: 95.4486% ( 96) 00:32:19.463 2591.651 - 2606.545: 95.5516% ( 90) 00:32:19.463 2606.545 - 2621.440: 95.6545% ( 90) 00:32:19.463 2621.440 - 2636.335: 95.7597% ( 92) 00:32:19.463 2636.335 - 2651.229: 95.8570% ( 85) 00:32:19.463 2651.229 - 2666.124: 95.9450% ( 77) 00:32:19.463 2666.124 - 2681.018: 96.0411% ( 84) 00:32:19.463 2681.018 - 2695.913: 96.1281% ( 76) 
00:32:19.463 2695.913 - 2710.807: 96.2093% ( 71) 00:32:19.463 2710.807 - 2725.702: 96.2871% ( 68) 00:32:19.463 2725.702 - 2740.596: 96.3637% ( 67) 00:32:19.463 2740.596 - 2755.491: 96.4426% ( 69) 00:32:19.463 2755.491 - 2770.385: 96.5090% ( 58) 00:32:19.463 2770.385 - 2785.280: 96.5890% ( 70) 00:32:19.463 2785.280 - 2800.175: 96.6622% ( 64) 00:32:19.463 2800.175 - 2815.069: 96.7389% ( 67) 00:32:19.463 2815.069 - 2829.964: 96.8167% ( 68) 00:32:19.463 2829.964 - 2844.858: 96.8876% ( 62) 00:32:19.463 2844.858 - 2859.753: 96.9677% ( 70) 00:32:19.463 2859.753 - 2874.647: 97.0397% ( 63) 00:32:19.463 2874.647 - 2889.542: 97.1129% ( 64) 00:32:19.463 2889.542 - 2904.436: 97.1884% ( 66) 00:32:19.463 2904.436 - 2919.331: 97.2582% ( 61) 00:32:19.463 2919.331 - 2934.225: 97.3291% ( 62) 00:32:19.463 2934.225 - 2949.120: 97.4057% ( 67) 00:32:19.463 2949.120 - 2964.015: 97.4790% ( 64) 00:32:19.463 2964.015 - 2978.909: 97.5533% ( 65) 00:32:19.463 2978.909 - 2993.804: 97.6277% ( 65) 00:32:19.463 2993.804 - 3008.698: 97.7043% ( 67) 00:32:19.463 3008.698 - 3023.593: 97.7775% ( 64) 00:32:19.463 3023.593 - 3038.487: 97.8393% ( 54) 00:32:19.463 3038.487 - 3053.382: 97.9125% ( 64) 00:32:19.464 3053.382 - 3068.276: 97.9822% ( 61) 00:32:19.464 3068.276 - 3083.171: 98.0543% ( 63) 00:32:19.464 3083.171 - 3098.065: 98.1275% ( 64) 00:32:19.464 3098.065 - 3112.960: 98.1961% ( 60) 00:32:19.464 3112.960 - 3127.855: 98.2694% ( 64) 00:32:19.464 3127.855 - 3142.749: 98.3437% ( 65) 00:32:19.464 3142.749 - 3157.644: 98.4135% ( 61) 00:32:19.464 3157.644 - 3172.538: 98.4890% ( 66) 00:32:19.464 3172.538 - 3187.433: 98.5576% ( 60) 00:32:19.464 3187.433 - 3202.327: 98.6297% ( 63) 00:32:19.464 3202.327 - 3217.222: 98.7017% ( 63) 00:32:19.464 3217.222 - 3232.116: 98.7726% ( 62) 00:32:19.464 3232.116 - 3247.011: 98.8390% ( 58) 00:32:19.464 3247.011 - 3261.905: 98.9111% ( 63) 00:32:19.464 3261.905 - 3276.800: 98.9797% ( 60) 00:32:19.464 3276.800 - 3291.695: 99.0529% ( 64) 00:32:19.464 3291.695 - 3306.589: 99.1169% ( 56) 00:32:19.464 3306.589 - 3321.484: 99.1833% ( 58) 00:32:19.464 3321.484 - 3336.378: 99.2451% ( 54) 00:32:19.464 3336.378 - 3351.273: 99.3034% ( 51) 00:32:19.464 3351.273 - 3366.167: 99.3549% ( 45) 00:32:19.464 3366.167 - 3381.062: 99.4086% ( 47) 00:32:19.464 3381.062 - 3395.956: 99.4601% ( 45) 00:32:19.464 3395.956 - 3410.851: 99.5013% ( 36) 00:32:19.464 3410.851 - 3425.745: 99.5459% ( 39) 00:32:19.464 3425.745 - 3440.640: 99.5814% ( 31) 00:32:19.464 3440.640 - 3455.535: 99.6180% ( 32) 00:32:19.464 3455.535 - 3470.429: 99.6523% ( 30) 00:32:19.464 3470.429 - 3485.324: 99.6809% ( 25) 00:32:19.464 3485.324 - 3500.218: 99.7095% ( 25) 00:32:19.464 3500.218 - 3515.113: 99.7312% ( 19) 00:32:19.464 3515.113 - 3530.007: 99.7518% ( 18) 00:32:19.464 3530.007 - 3544.902: 99.7701% ( 16) 00:32:19.464 3544.902 - 3559.796: 99.7884% ( 16) 00:32:19.464 3559.796 - 3574.691: 99.7975% ( 8) 00:32:19.464 3574.691 - 3589.585: 99.8113% ( 12) 00:32:19.464 3589.585 - 3604.480: 99.8204% ( 8) 00:32:19.464 3604.480 - 3619.375: 99.8284% ( 7) 00:32:19.464 3619.375 - 3634.269: 99.8341% ( 5) 00:32:19.464 3634.269 - 3649.164: 99.8387% ( 4) 00:32:19.464 3649.164 - 3664.058: 99.8410% ( 2) 00:32:19.464 3664.058 - 3678.953: 99.8456% ( 4) 00:32:19.464 3678.953 - 3693.847: 99.8502% ( 4) 00:32:19.464 3693.847 - 3708.742: 99.8547% ( 4) 00:32:19.464 3708.742 - 3723.636: 99.8582% ( 3) 00:32:19.464 3723.636 - 3738.531: 99.8627% ( 4) 00:32:19.464 3738.531 - 3753.425: 99.8662% ( 3) 00:32:19.464 3753.425 - 3768.320: 99.8707% ( 4) 00:32:19.464 3768.320 - 3783.215: 
99.8730% ( 2) 00:32:19.464 3783.215 - 3798.109: 99.8753% ( 2) 00:32:19.464 3798.109 - 3813.004: 99.8799% ( 4) 00:32:19.464 3813.004 - 3842.793: 99.8856% ( 5) 00:32:19.464 3842.793 - 3872.582: 99.8890% ( 3) 00:32:19.464 3872.582 - 3902.371: 99.8913% ( 2) 00:32:19.464 3902.371 - 3932.160: 99.8948% ( 3) 00:32:19.464 3932.160 - 3961.949: 99.8982% ( 3) 00:32:19.464 3961.949 - 3991.738: 99.9016% ( 3) 00:32:19.464 3991.738 - 4021.527: 99.9039% ( 2) 00:32:19.464 4021.527 - 4051.316: 99.9062% ( 2) 00:32:19.464 4051.316 - 4081.105: 99.9073% ( 1) 00:32:19.464 4081.105 - 4110.895: 99.9085% ( 1) 00:32:19.464 4110.895 - 4140.684: 99.9096% ( 1) 00:32:19.464 4140.684 - 4170.473: 99.9108% ( 1) 00:32:19.464 4170.473 - 4200.262: 99.9119% ( 1) 00:32:19.464 4200.262 - 4230.051: 99.9131% ( 1) 00:32:19.464 4230.051 - 4259.840: 99.9142% ( 1) 00:32:19.464 4259.840 - 4289.629: 99.9154% ( 1) 00:32:19.464 4289.629 - 4319.418: 99.9165% ( 1) 00:32:19.464 4319.418 - 4349.207: 99.9176% ( 1) 00:32:19.464 4349.207 - 4378.996: 99.9188% ( 1) 00:32:19.464 4378.996 - 4408.785: 99.9199% ( 1) 00:32:19.464 4408.785 - 4438.575: 99.9211% ( 1) 00:32:19.464 4438.575 - 4468.364: 99.9222% ( 1) 00:32:19.464 4468.364 - 4498.153: 99.9234% ( 1) 00:32:19.464 4498.153 - 4527.942: 99.9245% ( 1) 00:32:19.464 4527.942 - 4557.731: 99.9256% ( 1) 00:32:19.464 4557.731 - 4587.520: 99.9268% ( 1) 00:32:19.464 4617.309 - 4647.098: 99.9279% ( 1) 00:32:19.464 4647.098 - 4676.887: 99.9291% ( 1) 00:32:19.464 4706.676 - 4736.465: 99.9302% ( 1) 00:32:19.464 4736.465 - 4766.255: 99.9314% ( 1) 00:32:19.464 4766.255 - 4796.044: 99.9325% ( 1) 00:32:19.464 4796.044 - 4825.833: 99.9337% ( 1) 00:32:19.464 4825.833 - 4855.622: 99.9348% ( 1) 00:32:19.464 4855.622 - 4885.411: 99.9359% ( 1) 00:32:19.464 4885.411 - 4915.200: 99.9371% ( 1) 00:32:19.464 4944.989 - 4974.778: 99.9382% ( 1) 00:32:19.464 4974.778 - 5004.567: 99.9394% ( 1) 00:32:19.464 5004.567 - 5034.356: 99.9405% ( 1) 00:32:19.464 5034.356 - 5064.145: 99.9417% ( 1) 00:32:19.464 5064.145 - 5093.935: 99.9428% ( 1) 00:32:19.464 5093.935 - 5123.724: 99.9440% ( 1) 00:32:19.464 5123.724 - 5153.513: 99.9451% ( 1) 00:32:19.464 5153.513 - 5183.302: 99.9462% ( 1) 00:32:19.464 5183.302 - 5213.091: 99.9474% ( 1) 00:32:19.464 5213.091 - 5242.880: 99.9485% ( 1) 00:32:19.464 5242.880 - 5272.669: 99.9497% ( 1) 00:32:19.464 5272.669 - 5302.458: 99.9508% ( 1) 00:32:19.464 5332.247 - 5362.036: 99.9520% ( 1) 00:32:19.464 5362.036 - 5391.825: 99.9531% ( 1) 00:32:19.464 5391.825 - 5421.615: 99.9542% ( 1) 00:32:19.464 5421.615 - 5451.404: 99.9554% ( 1) 00:32:19.464 5451.404 - 5481.193: 99.9565% ( 1) 00:32:19.464 5481.193 - 5510.982: 99.9577% ( 1) 00:32:19.464 5510.982 - 5540.771: 99.9588% ( 1) 00:32:19.464 5540.771 - 5570.560: 99.9600% ( 1) 00:32:19.464 5570.560 - 5600.349: 99.9611% ( 1) 00:32:19.464 5600.349 - 5630.138: 99.9623% ( 1) 00:32:19.464 5630.138 - 5659.927: 99.9634% ( 1) 00:32:19.464 5659.927 - 5689.716: 99.9645% ( 1) 00:32:19.464 5689.716 - 5719.505: 99.9657% ( 1) 00:32:19.464 5749.295 - 5779.084: 99.9668% ( 1) 00:32:19.464 5779.084 - 5808.873: 99.9680% ( 1) 00:32:19.464 5808.873 - 5838.662: 99.9691% ( 1) 00:32:19.464 5838.662 - 5868.451: 99.9703% ( 1) 00:32:19.464 5868.451 - 5898.240: 99.9714% ( 1) 00:32:19.464 5898.240 - 5928.029: 99.9725% ( 1) 00:32:19.464 5928.029 - 5957.818: 99.9737% ( 1) 00:32:19.464 5957.818 - 5987.607: 99.9748% ( 1) 00:32:19.464 5987.607 - 6017.396: 99.9760% ( 1) 00:32:19.464 6017.396 - 6047.185: 99.9771% ( 1) 00:32:19.464 6047.185 - 6076.975: 99.9783% ( 1) 00:32:19.464 6076.975 - 6106.764: 
99.9794% ( 1) 00:32:19.464 6106.764 - 6136.553: 99.9806% ( 1) 00:32:19.464 6136.553 - 6166.342: 99.9817% ( 1) 00:32:19.464 6166.342 - 6196.131: 99.9828% ( 1) 00:32:19.464 6196.131 - 6225.920: 99.9840% ( 1) 00:32:19.464 6225.920 - 6255.709: 99.9851% ( 1) 00:32:19.464 6255.709 - 6285.498: 99.9863% ( 1) 00:32:19.464 6285.498 - 6315.287: 99.9874% ( 1) 00:32:19.464 6315.287 - 6345.076: 99.9886% ( 1) 00:32:19.464 6345.076 - 6374.865: 99.9897% ( 1) 00:32:19.464 6374.865 - 6404.655: 99.9908% ( 1) 00:32:19.464 6434.444 - 6464.233: 99.9931% ( 2) 00:32:19.464 6494.022 - 6523.811: 99.9943% ( 1) 00:32:19.464 6523.811 - 6553.600: 99.9954% ( 1) 00:32:19.464 6553.600 - 6583.389: 99.9966% ( 1) 00:32:19.464 6583.389 - 6613.178: 99.9977% ( 1) 00:32:19.464 6613.178 - 6642.967: 99.9989% ( 1) 00:32:19.464 6702.545 - 6732.335: 100.0000% ( 1) 00:32:19.464 00:32:19.464 00:51:52 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:32:20.852 Initializing NVMe Controllers 00:32:20.852 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:20.852 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:20.852 Initialization complete. Launching workers. 00:32:20.852 ======================================================== 00:32:20.852 Latency(us) 00:32:20.852 Device Information : IOPS MiB/s Average min max 00:32:20.852 PCIE (0000:00:10.0) NSID 1 from core 0: 57596.94 674.96 2221.34 736.64 6320.48 00:32:20.852 ======================================================== 00:32:20.852 Total : 57596.94 674.96 2221.34 736.64 6320.48 00:32:20.852 00:32:20.852 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:32:20.852 ================================================================================= 00:32:20.852 1.00000% : 1124.538us 00:32:20.852 10.00000% : 1377.745us 00:32:20.852 25.00000% : 1601.164us 00:32:20.852 50.00000% : 2025.658us 00:32:20.852 75.00000% : 2919.331us 00:32:20.852 90.00000% : 3172.538us 00:32:20.852 95.00000% : 3485.324us 00:32:20.852 98.00000% : 3798.109us 00:32:20.852 99.00000% : 3991.738us 00:32:20.852 99.50000% : 4170.473us 00:32:20.852 99.90000% : 5391.825us 00:32:20.852 99.99000% : 6196.131us 00:32:20.852 99.99900% : 6345.076us 00:32:20.852 99.99990% : 6345.076us 00:32:20.852 99.99999% : 6345.076us 00:32:20.852 00:32:20.852 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:32:20.852 ============================================================================== 00:32:20.852 Range in us Cumulative IO count 00:32:20.852 733.556 - 737.280: 0.0017% ( 1) 00:32:20.852 755.898 - 759.622: 0.0035% ( 1) 00:32:20.852 778.240 - 781.964: 0.0052% ( 1) 00:32:20.852 796.858 - 800.582: 0.0069% ( 1) 00:32:20.852 800.582 - 804.305: 0.0087% ( 1) 00:32:20.852 830.371 - 834.095: 0.0104% ( 1) 00:32:20.852 845.265 - 848.989: 0.0122% ( 1) 00:32:20.852 856.436 - 860.160: 0.0139% ( 1) 00:32:20.852 863.884 - 867.607: 0.0156% ( 1) 00:32:20.852 867.607 - 871.331: 0.0174% ( 1) 00:32:20.852 882.502 - 886.225: 0.0191% ( 1) 00:32:20.852 897.396 - 901.120: 0.0226% ( 2) 00:32:20.852 901.120 - 904.844: 0.0243% ( 1) 00:32:20.852 908.567 - 912.291: 0.0278% ( 2) 00:32:20.852 912.291 - 916.015: 0.0365% ( 5) 00:32:20.852 919.738 - 923.462: 0.0399% ( 2) 00:32:20.852 923.462 - 927.185: 0.0434% ( 2) 00:32:20.852 930.909 - 934.633: 0.0451% ( 1) 00:32:20.852 934.633 - 938.356: 0.0486% ( 2) 00:32:20.852 938.356 - 942.080: 0.0538% ( 3) 00:32:20.852 942.080 - 945.804: 0.0608% ( 4) 00:32:20.852 945.804 - 949.527: 0.0625% ( 1) 00:32:20.852 
949.527 - 953.251: 0.0642% ( 1) 00:32:20.852 953.251 - 960.698: 0.0694% ( 3) 00:32:20.852 960.698 - 968.145: 0.0781% ( 5) 00:32:20.852 968.145 - 975.593: 0.0851% ( 4) 00:32:20.852 975.593 - 983.040: 0.0989% ( 8) 00:32:20.852 983.040 - 990.487: 0.1146% ( 9) 00:32:20.852 990.487 - 997.935: 0.1371% ( 13) 00:32:20.852 997.935 - 1005.382: 0.1545% ( 10) 00:32:20.852 1005.382 - 1012.829: 0.1927% ( 22) 00:32:20.852 1012.829 - 1020.276: 0.2135% ( 12) 00:32:20.852 1020.276 - 1027.724: 0.2396% ( 15) 00:32:20.852 1027.724 - 1035.171: 0.2656% ( 15) 00:32:20.852 1035.171 - 1042.618: 0.2899% ( 14) 00:32:20.852 1042.618 - 1050.065: 0.3212% ( 18) 00:32:20.852 1050.065 - 1057.513: 0.3628% ( 24) 00:32:20.852 1057.513 - 1064.960: 0.4045% ( 24) 00:32:20.852 1064.960 - 1072.407: 0.4635% ( 34) 00:32:20.852 1072.407 - 1079.855: 0.5277% ( 37) 00:32:20.852 1079.855 - 1087.302: 0.5902% ( 36) 00:32:20.852 1087.302 - 1094.749: 0.6614% ( 41) 00:32:20.852 1094.749 - 1102.196: 0.7395% ( 45) 00:32:20.852 1102.196 - 1109.644: 0.8176% ( 45) 00:32:20.852 1109.644 - 1117.091: 0.9270% ( 63) 00:32:20.852 1117.091 - 1124.538: 1.0086% ( 47) 00:32:20.852 1124.538 - 1131.985: 1.1232% ( 66) 00:32:20.852 1131.985 - 1139.433: 1.2256% ( 59) 00:32:20.853 1139.433 - 1146.880: 1.3766% ( 87) 00:32:20.853 1146.880 - 1154.327: 1.4808% ( 60) 00:32:20.853 1154.327 - 1161.775: 1.6231% ( 82) 00:32:20.853 1161.775 - 1169.222: 1.7637% ( 81) 00:32:20.853 1169.222 - 1176.669: 1.9408% ( 102) 00:32:20.853 1176.669 - 1184.116: 2.1092% ( 97) 00:32:20.853 1184.116 - 1191.564: 2.2949% ( 107) 00:32:20.853 1191.564 - 1199.011: 2.4668% ( 99) 00:32:20.853 1199.011 - 1206.458: 2.6525% ( 107) 00:32:20.853 1206.458 - 1213.905: 2.8626% ( 121) 00:32:20.853 1213.905 - 1221.353: 3.0848% ( 128) 00:32:20.853 1221.353 - 1228.800: 3.3226% ( 137) 00:32:20.853 1228.800 - 1236.247: 3.5605% ( 137) 00:32:20.853 1236.247 - 1243.695: 3.7965% ( 136) 00:32:20.853 1243.695 - 1251.142: 4.0569% ( 150) 00:32:20.853 1251.142 - 1258.589: 4.3156% ( 149) 00:32:20.853 1258.589 - 1266.036: 4.5934% ( 160) 00:32:20.853 1266.036 - 1273.484: 4.8780% ( 164) 00:32:20.853 1273.484 - 1280.931: 5.2061% ( 189) 00:32:20.853 1280.931 - 1288.378: 5.5377% ( 191) 00:32:20.853 1288.378 - 1295.825: 5.8658% ( 189) 00:32:20.853 1295.825 - 1303.273: 6.1922% ( 188) 00:32:20.853 1303.273 - 1310.720: 6.5637% ( 214) 00:32:20.853 1310.720 - 1318.167: 6.9907% ( 246) 00:32:20.853 1318.167 - 1325.615: 7.3518% ( 208) 00:32:20.853 1325.615 - 1333.062: 7.7493% ( 229) 00:32:20.853 1333.062 - 1340.509: 8.1746% ( 245) 00:32:20.853 1340.509 - 1347.956: 8.6017% ( 246) 00:32:20.853 1347.956 - 1355.404: 9.0982% ( 286) 00:32:20.853 1355.404 - 1362.851: 9.5079% ( 236) 00:32:20.853 1362.851 - 1370.298: 9.9800% ( 272) 00:32:20.853 1370.298 - 1377.745: 10.4713% ( 283) 00:32:20.853 1377.745 - 1385.193: 10.9192% ( 258) 00:32:20.853 1385.193 - 1392.640: 11.3862% ( 269) 00:32:20.853 1392.640 - 1400.087: 11.8653% ( 276) 00:32:20.853 1400.087 - 1407.535: 12.3132% ( 258) 00:32:20.853 1407.535 - 1414.982: 12.7697% ( 263) 00:32:20.853 1414.982 - 1422.429: 13.3009% ( 306) 00:32:20.853 1422.429 - 1429.876: 13.8096% ( 293) 00:32:20.853 1429.876 - 1437.324: 14.3182% ( 293) 00:32:20.853 1437.324 - 1444.771: 14.8320% ( 296) 00:32:20.853 1444.771 - 1452.218: 15.3580% ( 303) 00:32:20.853 1452.218 - 1459.665: 15.8511% ( 284) 00:32:20.853 1459.665 - 1467.113: 16.3441% ( 284) 00:32:20.853 1467.113 - 1474.560: 16.8024% ( 264) 00:32:20.853 1474.560 - 1482.007: 17.2850% ( 278) 00:32:20.853 1482.007 - 1489.455: 17.8127% ( 304) 00:32:20.853 1489.455 - 
1496.902: 18.2901% ( 275) 00:32:20.853 1496.902 - 1504.349: 18.8230% ( 307) 00:32:20.853 1504.349 - 1511.796: 19.3317% ( 293) 00:32:20.853 1511.796 - 1519.244: 19.8056% ( 273) 00:32:20.853 1519.244 - 1526.691: 20.3107% ( 291) 00:32:20.853 1526.691 - 1534.138: 20.8107% ( 288) 00:32:20.853 1534.138 - 1541.585: 21.3245% ( 296) 00:32:20.853 1541.585 - 1549.033: 21.8384% ( 296) 00:32:20.853 1549.033 - 1556.480: 22.3401% ( 289) 00:32:20.853 1556.480 - 1563.927: 22.8817% ( 312) 00:32:20.853 1563.927 - 1571.375: 23.3869% ( 291) 00:32:20.853 1571.375 - 1578.822: 23.9163% ( 305) 00:32:20.853 1578.822 - 1586.269: 24.3850% ( 270) 00:32:20.853 1586.269 - 1593.716: 24.8468% ( 266) 00:32:20.853 1593.716 - 1601.164: 25.3589% ( 295) 00:32:20.853 1601.164 - 1608.611: 25.8311% ( 272) 00:32:20.853 1608.611 - 1616.058: 26.3554% ( 302) 00:32:20.853 1616.058 - 1623.505: 26.8345% ( 276) 00:32:20.853 1623.505 - 1630.953: 27.2928% ( 264) 00:32:20.853 1630.953 - 1638.400: 27.7771% ( 279) 00:32:20.853 1638.400 - 1645.847: 28.2354% ( 264) 00:32:20.853 1645.847 - 1653.295: 28.6867% ( 260) 00:32:20.853 1653.295 - 1660.742: 29.1346% ( 258) 00:32:20.853 1660.742 - 1668.189: 29.5825% ( 258) 00:32:20.853 1668.189 - 1675.636: 30.0495% ( 269) 00:32:20.853 1675.636 - 1683.084: 30.5251% ( 274) 00:32:20.853 1683.084 - 1690.531: 30.9869% ( 266) 00:32:20.853 1690.531 - 1697.978: 31.4869% ( 288) 00:32:20.853 1697.978 - 1705.425: 31.9295% ( 255) 00:32:20.853 1705.425 - 1712.873: 32.4312% ( 289) 00:32:20.853 1712.873 - 1720.320: 32.8774% ( 257) 00:32:20.853 1720.320 - 1727.767: 33.3287% ( 260) 00:32:20.853 1727.767 - 1735.215: 33.8564% ( 304) 00:32:20.853 1735.215 - 1742.662: 34.3442% ( 281) 00:32:20.853 1742.662 - 1750.109: 34.8043% ( 265) 00:32:20.853 1750.109 - 1757.556: 35.2990% ( 285) 00:32:20.853 1757.556 - 1765.004: 35.8111% ( 295) 00:32:20.853 1765.004 - 1772.451: 36.2677% ( 263) 00:32:20.853 1772.451 - 1779.898: 36.7485% ( 277) 00:32:20.853 1779.898 - 1787.345: 37.2294% ( 277) 00:32:20.853 1787.345 - 1794.793: 37.7346% ( 291) 00:32:20.853 1794.793 - 1802.240: 38.2241% ( 282) 00:32:20.853 1802.240 - 1809.687: 38.7814% ( 321) 00:32:20.853 1809.687 - 1817.135: 39.2344% ( 261) 00:32:20.853 1817.135 - 1824.582: 39.6788% ( 256) 00:32:20.853 1824.582 - 1832.029: 40.1614% ( 278) 00:32:20.853 1832.029 - 1839.476: 40.5763% ( 239) 00:32:20.853 1839.476 - 1846.924: 41.0659% ( 282) 00:32:20.853 1846.924 - 1854.371: 41.5276% ( 266) 00:32:20.853 1854.371 - 1861.818: 41.9582% ( 248) 00:32:20.853 1861.818 - 1869.265: 42.4112% ( 261) 00:32:20.853 1869.265 - 1876.713: 42.8348% ( 244) 00:32:20.853 1876.713 - 1884.160: 43.2862% ( 260) 00:32:20.853 1884.160 - 1891.607: 43.7028% ( 240) 00:32:20.853 1891.607 - 1899.055: 44.1160% ( 238) 00:32:20.853 1899.055 - 1906.502: 44.5361% ( 242) 00:32:20.853 1906.502 - 1921.396: 45.4353% ( 518) 00:32:20.853 1921.396 - 1936.291: 46.2460% ( 467) 00:32:20.853 1936.291 - 1951.185: 47.0046% ( 437) 00:32:20.853 1951.185 - 1966.080: 47.7858% ( 450) 00:32:20.853 1966.080 - 1980.975: 48.5253% ( 426) 00:32:20.853 1980.975 - 1995.869: 49.1989% ( 388) 00:32:20.853 1995.869 - 2010.764: 49.8811% ( 393) 00:32:20.853 2010.764 - 2025.658: 50.5494% ( 385) 00:32:20.853 2025.658 - 2040.553: 51.1674% ( 356) 00:32:20.853 2040.553 - 2055.447: 51.7785% ( 352) 00:32:20.853 2055.447 - 2070.342: 52.3861% ( 350) 00:32:20.853 2070.342 - 2085.236: 52.9433% ( 321) 00:32:20.853 2085.236 - 2100.131: 53.4954% ( 318) 00:32:20.853 2100.131 - 2115.025: 54.0283% ( 307) 00:32:20.853 2115.025 - 2129.920: 54.5751% ( 315) 00:32:20.853 2129.920 - 
2144.815: 55.0699% ( 285) 00:32:20.853 2144.815 - 2159.709: 55.5403% ( 271) 00:32:20.853 2159.709 - 2174.604: 55.9917% ( 260) 00:32:20.853 2174.604 - 2189.498: 56.4534% ( 266) 00:32:20.853 2189.498 - 2204.393: 56.8787% ( 245) 00:32:20.853 2204.393 - 2219.287: 57.2815% ( 232) 00:32:20.853 2219.287 - 2234.182: 57.6808% ( 230) 00:32:20.853 2234.182 - 2249.076: 58.0384% ( 206) 00:32:20.853 2249.076 - 2263.971: 58.4237% ( 222) 00:32:20.853 2263.971 - 2278.865: 58.7536% ( 190) 00:32:20.853 2278.865 - 2293.760: 59.1181% ( 210) 00:32:20.853 2293.760 - 2308.655: 59.4566% ( 195) 00:32:20.853 2308.655 - 2323.549: 59.7847% ( 189) 00:32:20.853 2323.549 - 2338.444: 60.1024% ( 183) 00:32:20.853 2338.444 - 2353.338: 60.4166% ( 181) 00:32:20.853 2353.338 - 2368.233: 60.7829% ( 211) 00:32:20.853 2368.233 - 2383.127: 61.1475% ( 210) 00:32:20.854 2383.127 - 2398.022: 61.5016% ( 204) 00:32:20.854 2398.022 - 2412.916: 61.8540% ( 203) 00:32:20.854 2412.916 - 2427.811: 62.2324% ( 218) 00:32:20.854 2427.811 - 2442.705: 62.6057% ( 215) 00:32:20.854 2442.705 - 2457.600: 62.9529% ( 200) 00:32:20.854 2457.600 - 2472.495: 63.3018% ( 201) 00:32:20.854 2472.495 - 2487.389: 63.6959% ( 227) 00:32:20.854 2487.389 - 2502.284: 64.0795% ( 221) 00:32:20.854 2502.284 - 2517.178: 64.4441% ( 210) 00:32:20.854 2517.178 - 2532.073: 64.8364% ( 226) 00:32:20.854 2532.073 - 2546.967: 65.2426% ( 234) 00:32:20.854 2546.967 - 2561.862: 65.6315% ( 224) 00:32:20.854 2561.862 - 2576.756: 66.0741% ( 255) 00:32:20.854 2576.756 - 2591.651: 66.5012% ( 246) 00:32:20.854 2591.651 - 2606.545: 66.8918% ( 225) 00:32:20.854 2606.545 - 2621.440: 67.3258% ( 250) 00:32:20.854 2621.440 - 2636.335: 67.7337% ( 235) 00:32:20.854 2636.335 - 2651.229: 68.1312% ( 229) 00:32:20.854 2651.229 - 2666.124: 68.5444% ( 238) 00:32:20.854 2666.124 - 2681.018: 69.0131% ( 270) 00:32:20.854 2681.018 - 2695.913: 69.3950% ( 220) 00:32:20.854 2695.913 - 2710.807: 69.8533% ( 264) 00:32:20.854 2710.807 - 2725.702: 70.2717% ( 241) 00:32:20.854 2725.702 - 2740.596: 70.6900% ( 241) 00:32:20.854 2740.596 - 2755.491: 71.0772% ( 223) 00:32:20.854 2755.491 - 2770.385: 71.4886% ( 237) 00:32:20.854 2770.385 - 2785.280: 71.8531% ( 210) 00:32:20.854 2785.280 - 2800.175: 72.2628% ( 236) 00:32:20.854 2800.175 - 2815.069: 72.6065% ( 198) 00:32:20.854 2815.069 - 2829.964: 72.9728% ( 211) 00:32:20.854 2829.964 - 2844.858: 73.3096% ( 194) 00:32:20.854 2844.858 - 2859.753: 73.6551% ( 199) 00:32:20.854 2859.753 - 2874.647: 74.0404% ( 222) 00:32:20.854 2874.647 - 2889.542: 74.4258% ( 222) 00:32:20.854 2889.542 - 2904.436: 74.7539% ( 189) 00:32:20.854 2904.436 - 2919.331: 75.1428% ( 224) 00:32:20.854 2919.331 - 2934.225: 75.5403% ( 229) 00:32:20.854 2934.225 - 2949.120: 75.9778% ( 252) 00:32:20.854 2949.120 - 2964.015: 76.5211% ( 313) 00:32:20.854 2964.015 - 2978.909: 77.1443% ( 359) 00:32:20.854 2978.909 - 2993.804: 77.8908% ( 430) 00:32:20.854 2993.804 - 3008.698: 78.7397% ( 489) 00:32:20.854 3008.698 - 3023.593: 79.5400% ( 461) 00:32:20.854 3023.593 - 3038.487: 80.4392% ( 518) 00:32:20.854 3038.487 - 3053.382: 81.5103% ( 617) 00:32:20.854 3053.382 - 3068.276: 82.6682% ( 667) 00:32:20.854 3068.276 - 3083.171: 83.8903% ( 704) 00:32:20.854 3083.171 - 3098.065: 85.1818% ( 744) 00:32:20.854 3098.065 - 3112.960: 86.3692% ( 684) 00:32:20.854 3112.960 - 3127.855: 87.4560% ( 626) 00:32:20.854 3127.855 - 3142.749: 88.4420% ( 568) 00:32:20.854 3142.749 - 3157.644: 89.2822% ( 484) 00:32:20.854 3157.644 - 3172.538: 90.0026% ( 415) 00:32:20.854 3172.538 - 3187.433: 90.5477% ( 314) 00:32:20.854 3187.433 - 
3202.327: 91.0494% ( 289) 00:32:20.854 3202.327 - 3217.222: 91.4608% ( 237) 00:32:20.854 3217.222 - 3232.116: 91.8393% ( 218) 00:32:20.854 3232.116 - 3247.011: 92.1500% ( 179) 00:32:20.854 3247.011 - 3261.905: 92.3930% ( 140) 00:32:20.854 3261.905 - 3276.800: 92.6013% ( 120) 00:32:20.854 3276.800 - 3291.695: 92.7801% ( 103) 00:32:20.854 3291.695 - 3306.589: 92.9902% ( 121) 00:32:20.854 3306.589 - 3321.484: 93.1325% ( 82) 00:32:20.854 3321.484 - 3336.378: 93.3096% ( 102) 00:32:20.854 3336.378 - 3351.273: 93.4884% ( 103) 00:32:20.854 3351.273 - 3366.167: 93.6620% ( 100) 00:32:20.854 3366.167 - 3381.062: 93.8391% ( 102) 00:32:20.854 3381.062 - 3395.956: 94.0075% ( 97) 00:32:20.854 3395.956 - 3410.851: 94.1845% ( 102) 00:32:20.854 3410.851 - 3425.745: 94.3460% ( 93) 00:32:20.854 3425.745 - 3440.640: 94.5213% ( 101) 00:32:20.854 3440.640 - 3455.535: 94.6914% ( 98) 00:32:20.854 3455.535 - 3470.429: 94.8511% ( 92) 00:32:20.854 3470.429 - 3485.324: 95.0022% ( 87) 00:32:20.854 3485.324 - 3500.218: 95.1723% ( 98) 00:32:20.854 3500.218 - 3515.113: 95.3077% ( 78) 00:32:20.854 3515.113 - 3530.007: 95.4709% ( 94) 00:32:20.854 3530.007 - 3544.902: 95.6427% ( 99) 00:32:20.854 3544.902 - 3559.796: 95.7799% ( 79) 00:32:20.854 3559.796 - 3574.691: 95.9066% ( 73) 00:32:20.854 3574.691 - 3589.585: 96.0889% ( 105) 00:32:20.854 3589.585 - 3604.480: 96.2573% ( 97) 00:32:20.854 3604.480 - 3619.375: 96.3788% ( 70) 00:32:20.854 3619.375 - 3634.269: 96.5333% ( 89) 00:32:20.854 3634.269 - 3649.164: 96.6878% ( 89) 00:32:20.854 3649.164 - 3664.058: 96.8215% ( 77) 00:32:20.854 3664.058 - 3678.953: 96.9777% ( 90) 00:32:20.854 3678.953 - 3693.847: 97.1079% ( 75) 00:32:20.854 3693.847 - 3708.742: 97.2433% ( 78) 00:32:20.854 3708.742 - 3723.636: 97.3891% ( 84) 00:32:20.854 3723.636 - 3738.531: 97.5037% ( 66) 00:32:20.854 3738.531 - 3753.425: 97.6408% ( 79) 00:32:20.854 3753.425 - 3768.320: 97.7762% ( 78) 00:32:20.854 3768.320 - 3783.215: 97.8804% ( 60) 00:32:20.854 3783.215 - 3798.109: 98.0036% ( 71) 00:32:20.854 3798.109 - 3813.004: 98.1182% ( 66) 00:32:20.854 3813.004 - 3842.793: 98.3352% ( 125) 00:32:20.854 3842.793 - 3872.582: 98.5158% ( 104) 00:32:20.854 3872.582 - 3902.371: 98.6772% ( 93) 00:32:20.854 3902.371 - 3932.160: 98.8300% ( 88) 00:32:20.854 3932.160 - 3961.949: 98.9428% ( 65) 00:32:20.854 3961.949 - 3991.738: 99.0365% ( 54) 00:32:20.854 3991.738 - 4021.527: 99.1320% ( 55) 00:32:20.854 4021.527 - 4051.316: 99.2240% ( 53) 00:32:20.854 4051.316 - 4081.105: 99.3039% ( 46) 00:32:20.854 4081.105 - 4110.895: 99.3803% ( 44) 00:32:20.854 4110.895 - 4140.684: 99.4428% ( 36) 00:32:20.854 4140.684 - 4170.473: 99.5018% ( 34) 00:32:20.854 4170.473 - 4200.262: 99.5556% ( 31) 00:32:20.854 4200.262 - 4230.051: 99.5903% ( 20) 00:32:20.854 4230.051 - 4259.840: 99.6268% ( 21) 00:32:20.854 4259.840 - 4289.629: 99.6459% ( 11) 00:32:20.854 4289.629 - 4319.418: 99.6754% ( 17) 00:32:20.854 4319.418 - 4349.207: 99.6910% ( 9) 00:32:20.854 4349.207 - 4378.996: 99.7170% ( 15) 00:32:20.854 4378.996 - 4408.785: 99.7361% ( 11) 00:32:20.854 4408.785 - 4438.575: 99.7500% ( 8) 00:32:20.854 4438.575 - 4468.364: 99.7639% ( 8) 00:32:20.854 4468.364 - 4498.153: 99.7743% ( 6) 00:32:20.854 4498.153 - 4527.942: 99.7795% ( 3) 00:32:20.854 4527.942 - 4557.731: 99.7865% ( 4) 00:32:20.854 4557.731 - 4587.520: 99.7952% ( 5) 00:32:20.854 4587.520 - 4617.309: 99.8056% ( 6) 00:32:20.854 4617.309 - 4647.098: 99.8125% ( 4) 00:32:20.854 4647.098 - 4676.887: 99.8177% ( 3) 00:32:20.854 4676.887 - 4706.676: 99.8212% ( 2) 00:32:20.854 4706.676 - 4736.465: 99.8247% ( 
2) 00:32:20.854 4736.465 - 4766.255: 99.8264% ( 1) 00:32:20.854 4766.255 - 4796.044: 99.8299% ( 2) 00:32:20.854 4796.044 - 4825.833: 99.8333% ( 2) 00:32:20.854 4825.833 - 4855.622: 99.8368% ( 2) 00:32:20.854 4855.622 - 4885.411: 99.8403% ( 2) 00:32:20.854 4885.411 - 4915.200: 99.8438% ( 2) 00:32:20.854 4915.200 - 4944.989: 99.8455% ( 1) 00:32:20.854 4944.989 - 4974.778: 99.8507% ( 3) 00:32:20.854 4974.778 - 5004.567: 99.8542% ( 2) 00:32:20.854 5004.567 - 5034.356: 99.8577% ( 2) 00:32:20.854 5034.356 - 5064.145: 99.8594% ( 1) 00:32:20.854 5064.145 - 5093.935: 99.8629% ( 2) 00:32:20.854 5093.935 - 5123.724: 99.8681% ( 3) 00:32:20.854 5123.724 - 5153.513: 99.8715% ( 2) 00:32:20.854 5153.513 - 5183.302: 99.8750% ( 2) 00:32:20.854 5183.302 - 5213.091: 99.8785% ( 2) 00:32:20.854 5213.091 - 5242.880: 99.8820% ( 2) 00:32:20.854 5242.880 - 5272.669: 99.8854% ( 2) 00:32:20.854 5272.669 - 5302.458: 99.8889% ( 2) 00:32:20.854 5302.458 - 5332.247: 99.8924% ( 2) 00:32:20.854 5332.247 - 5362.036: 99.8976% ( 3) 00:32:20.854 5362.036 - 5391.825: 99.9011% ( 2) 00:32:20.854 5391.825 - 5421.615: 99.9045% ( 2) 00:32:20.854 5421.615 - 5451.404: 99.9080% ( 2) 00:32:20.854 5451.404 - 5481.193: 99.9115% ( 2) 00:32:20.854 5481.193 - 5510.982: 99.9149% ( 2) 00:32:20.854 5510.982 - 5540.771: 99.9201% ( 3) 00:32:20.854 5540.771 - 5570.560: 99.9236% ( 2) 00:32:20.854 5570.560 - 5600.349: 99.9271% ( 2) 00:32:20.854 5600.349 - 5630.138: 99.9306% ( 2) 00:32:20.854 5630.138 - 5659.927: 99.9340% ( 2) 00:32:20.854 5659.927 - 5689.716: 99.9375% ( 2) 00:32:20.854 5689.716 - 5719.505: 99.9410% ( 2) 00:32:20.854 5719.505 - 5749.295: 99.9462% ( 3) 00:32:20.854 5749.295 - 5779.084: 99.9497% ( 2) 00:32:20.854 5779.084 - 5808.873: 99.9531% ( 2) 00:32:20.854 5808.873 - 5838.662: 99.9566% ( 2) 00:32:20.854 5838.662 - 5868.451: 99.9601% ( 2) 00:32:20.854 5868.451 - 5898.240: 99.9635% ( 2) 00:32:20.854 5898.240 - 5928.029: 99.9670% ( 2) 00:32:20.854 5928.029 - 5957.818: 99.9705% ( 2) 00:32:20.854 5957.818 - 5987.607: 99.9740% ( 2) 00:32:20.854 5987.607 - 6017.396: 99.9774% ( 2) 00:32:20.854 6017.396 - 6047.185: 99.9826% ( 3) 00:32:20.854 6047.185 - 6076.975: 99.9844% ( 1) 00:32:20.854 6076.975 - 6106.764: 99.9861% ( 1) 00:32:20.854 6106.764 - 6136.553: 99.9878% ( 1) 00:32:20.854 6136.553 - 6166.342: 99.9896% ( 1) 00:32:20.854 6166.342 - 6196.131: 99.9931% ( 2) 00:32:20.854 6196.131 - 6225.920: 99.9948% ( 1) 00:32:20.854 6225.920 - 6255.709: 99.9965% ( 1) 00:32:20.854 6255.709 - 6285.498: 99.9983% ( 1) 00:32:20.854 6315.287 - 6345.076: 100.0000% ( 1) 00:32:20.854 00:32:20.854 00:51:54 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:32:20.854 00:32:20.854 real 0m2.669s 00:32:20.854 user 0m2.226s 00:32:20.854 sys 0m0.287s 00:32:20.854 00:51:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:20.854 ************************************ 00:32:20.854 END TEST nvme_perf 00:32:20.854 ************************************ 00:32:20.854 00:51:54 -- common/autotest_common.sh@10 -- # set +x 00:32:20.854 00:51:54 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:32:20.855 00:51:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:32:20.855 00:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:20.855 00:51:54 -- common/autotest_common.sh@10 -- # set +x 00:32:20.855 ************************************ 00:32:20.855 START TEST nvme_hello_world 00:32:20.855 ************************************ 00:32:20.855 00:51:54 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:32:21.113 Initializing NVMe Controllers 00:32:21.113 Attached to 0000:00:10.0 00:32:21.113 Namespace ID: 1 size: 5GB 00:32:21.113 Initialization complete. 00:32:21.113 INFO: using host memory buffer for IO 00:32:21.113 Hello world! 00:32:21.113 00:32:21.113 real 0m0.335s 00:32:21.113 user 0m0.105s 00:32:21.113 sys 0m0.146s 00:32:21.113 00:51:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:21.113 ************************************ 00:32:21.113 END TEST nvme_hello_world 00:32:21.113 ************************************ 00:32:21.113 00:51:54 -- common/autotest_common.sh@10 -- # set +x 00:32:21.113 00:51:54 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:32:21.113 00:51:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:21.113 00:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:21.113 00:51:54 -- common/autotest_common.sh@10 -- # set +x 00:32:21.371 ************************************ 00:32:21.371 START TEST nvme_sgl 00:32:21.371 ************************************ 00:32:21.371 00:51:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:32:21.630 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:32:21.630 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:32:21.630 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:32:21.630 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:32:21.630 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:32:21.630 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:32:21.630 NVMe Readv/Writev Request test 00:32:21.630 Attached to 0000:00:10.0 00:32:21.630 0000:00:10.0: build_io_request_2 test passed 00:32:21.630 0000:00:10.0: build_io_request_4 test passed 00:32:21.630 0000:00:10.0: build_io_request_5 test passed 00:32:21.630 0000:00:10.0: build_io_request_6 test passed 00:32:21.630 0000:00:10.0: build_io_request_7 test passed 00:32:21.630 0000:00:10.0: build_io_request_10 test passed 00:32:21.630 Cleaning up... 00:32:21.630 ************************************ 00:32:21.630 END TEST nvme_sgl 00:32:21.630 ************************************ 00:32:21.630 00:32:21.630 real 0m0.381s 00:32:21.630 user 0m0.176s 00:32:21.630 sys 0m0.128s 00:32:21.630 00:51:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:21.630 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:32:21.630 00:51:55 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:32:21.630 00:51:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:21.630 00:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:21.630 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:32:21.630 ************************************ 00:32:21.630 START TEST nvme_e2edp 00:32:21.630 ************************************ 00:32:21.630 00:51:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:32:22.198 NVMe Write/Read with End-to-End data protection test 00:32:22.198 Attached to 0000:00:10.0 00:32:22.198 Cleaning up... 
00:32:22.198 ************************************ 00:32:22.198 END TEST nvme_e2edp 00:32:22.198 ************************************ 00:32:22.198 00:32:22.198 real 0m0.327s 00:32:22.198 user 0m0.121s 00:32:22.198 sys 0m0.129s 00:32:22.198 00:51:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:22.198 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:32:22.198 00:51:55 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:32:22.198 00:51:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:22.198 00:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:22.198 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:32:22.198 ************************************ 00:32:22.198 START TEST nvme_reserve 00:32:22.198 ************************************ 00:32:22.198 00:51:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:32:22.457 ===================================================== 00:32:22.457 NVMe Controller at PCI bus 0, device 16, function 0 00:32:22.457 ===================================================== 00:32:22.457 Reservations: Not Supported 00:32:22.457 Reservation test passed 00:32:22.457 ************************************ 00:32:22.457 END TEST nvme_reserve 00:32:22.457 ************************************ 00:32:22.457 00:32:22.457 real 0m0.323s 00:32:22.457 user 0m0.124s 00:32:22.457 sys 0m0.133s 00:32:22.457 00:51:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:22.457 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:32:22.457 00:51:55 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:32:22.457 00:51:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:22.457 00:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:22.457 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:32:22.457 ************************************ 00:32:22.457 START TEST nvme_err_injection 00:32:22.457 ************************************ 00:32:22.457 00:51:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:32:23.025 NVMe Error Injection test 00:32:23.025 Attached to 0000:00:10.0 00:32:23.025 0000:00:10.0: get features failed as expected 00:32:23.025 0000:00:10.0: get features successfully as expected 00:32:23.025 0000:00:10.0: read failed as expected 00:32:23.025 0000:00:10.0: read successfully as expected 00:32:23.025 Cleaning up... 
00:32:23.025 ************************************ 00:32:23.025 END TEST nvme_err_injection 00:32:23.025 ************************************ 00:32:23.025 00:32:23.025 real 0m0.323s 00:32:23.025 user 0m0.130s 00:32:23.025 sys 0m0.124s 00:32:23.025 00:51:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:23.025 00:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:23.025 00:51:56 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:32:23.025 00:51:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:32:23.025 00:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:23.025 00:51:56 -- common/autotest_common.sh@10 -- # set +x 00:32:23.025 ************************************ 00:32:23.025 START TEST nvme_overhead 00:32:23.025 ************************************ 00:32:23.025 00:51:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:32:24.403 Initializing NVMe Controllers 00:32:24.403 Attached to 0000:00:10.0 00:32:24.403 Initialization complete. Launching workers. 00:32:24.403 submit (in ns) avg, min, max = 15923.6, 12426.8, 223109.1 00:32:24.403 complete (in ns) avg, min, max = 10809.5, 8149.1, 244470.0 00:32:24.403 00:32:24.403 Submit histogram 00:32:24.403 ================ 00:32:24.403 Range in us Cumulative Count 00:32:24.403 12.393 - 12.451: 0.0114% ( 1) 00:32:24.403 12.684 - 12.742: 0.0229% ( 1) 00:32:24.403 12.800 - 12.858: 0.0458% ( 2) 00:32:24.403 12.916 - 12.975: 0.0686% ( 2) 00:32:24.403 12.975 - 13.033: 0.1144% ( 4) 00:32:24.403 13.033 - 13.091: 0.1945% ( 7) 00:32:24.403 13.091 - 13.149: 0.3089% ( 10) 00:32:24.403 13.149 - 13.207: 0.6635% ( 31) 00:32:24.403 13.207 - 13.265: 1.0411% ( 33) 00:32:24.403 13.265 - 13.324: 1.6016% ( 49) 00:32:24.403 13.324 - 13.382: 2.3796% ( 68) 00:32:24.403 13.382 - 13.440: 3.3063% ( 81) 00:32:24.403 13.440 - 13.498: 4.5189% ( 106) 00:32:24.403 13.498 - 13.556: 6.4638% ( 170) 00:32:24.403 13.556 - 13.615: 8.1798% ( 150) 00:32:24.403 13.615 - 13.673: 9.9073% ( 151) 00:32:24.403 13.673 - 13.731: 11.6005% ( 148) 00:32:24.403 13.731 - 13.789: 13.8657% ( 198) 00:32:24.403 13.789 - 13.847: 17.8584% ( 349) 00:32:24.403 13.847 - 13.905: 25.3747% ( 657) 00:32:24.403 13.905 - 13.964: 33.4401% ( 705) 00:32:24.403 13.964 - 14.022: 40.4302% ( 611) 00:32:24.403 14.022 - 14.080: 45.4067% ( 435) 00:32:24.403 14.080 - 14.138: 49.0447% ( 318) 00:32:24.403 14.138 - 14.196: 52.0993% ( 267) 00:32:24.403 14.196 - 14.255: 54.3874% ( 200) 00:32:24.403 14.255 - 14.313: 56.5954% ( 193) 00:32:24.403 14.313 - 14.371: 58.4487% ( 162) 00:32:24.403 14.371 - 14.429: 59.8787% ( 125) 00:32:24.403 14.429 - 14.487: 60.8626% ( 86) 00:32:24.403 14.487 - 14.545: 61.6634% ( 70) 00:32:24.403 14.545 - 14.604: 62.2469% ( 51) 00:32:24.403 14.604 - 14.662: 62.7503% ( 44) 00:32:24.403 14.662 - 14.720: 63.1507% ( 35) 00:32:24.403 14.720 - 14.778: 63.7227% ( 50) 00:32:24.403 14.778 - 14.836: 64.1689% ( 39) 00:32:24.403 14.836 - 14.895: 64.5922% ( 37) 00:32:24.403 14.895 - 15.011: 65.2557% ( 58) 00:32:24.403 15.011 - 15.127: 65.7819% ( 46) 00:32:24.403 15.127 - 15.244: 66.0336% ( 22) 00:32:24.403 15.244 - 15.360: 66.2167% ( 16) 00:32:24.403 15.360 - 15.476: 66.2968% ( 7) 00:32:24.403 15.476 - 15.593: 66.3883% ( 8) 00:32:24.403 15.593 - 15.709: 66.4684% ( 7) 00:32:24.403 15.709 - 15.825: 66.5370% ( 6) 00:32:24.403 15.825 - 15.942: 66.6057% ( 6) 00:32:24.403 15.942 - 16.058: 66.6285% ( 2) 00:32:24.403 16.175 - 16.291: 
66.6514% ( 2) 00:32:24.403 16.291 - 16.407: 66.6743% ( 2) 00:32:24.403 16.407 - 16.524: 66.7315% ( 5) 00:32:24.403 16.524 - 16.640: 66.7773% ( 4) 00:32:24.403 16.756 - 16.873: 66.8001% ( 2) 00:32:24.403 16.873 - 16.989: 68.3789% ( 138) 00:32:24.403 16.989 - 17.105: 75.9181% ( 659) 00:32:24.403 17.105 - 17.222: 83.1941% ( 636) 00:32:24.403 17.222 - 17.338: 86.1229% ( 256) 00:32:24.403 17.338 - 17.455: 87.5987% ( 129) 00:32:24.403 17.455 - 17.571: 88.9486% ( 118) 00:32:24.403 17.571 - 17.687: 89.5779% ( 55) 00:32:24.403 17.687 - 17.804: 89.8867% ( 27) 00:32:24.403 17.804 - 17.920: 90.1727% ( 25) 00:32:24.403 17.920 - 18.036: 90.6189% ( 39) 00:32:24.403 18.036 - 18.153: 91.0651% ( 39) 00:32:24.403 18.153 - 18.269: 91.3854% ( 28) 00:32:24.403 18.269 - 18.385: 91.6257% ( 21) 00:32:24.403 18.385 - 18.502: 91.8087% ( 16) 00:32:24.403 18.502 - 18.618: 91.9231% ( 10) 00:32:24.403 18.618 - 18.735: 92.0146% ( 8) 00:32:24.403 18.735 - 18.851: 92.0947% ( 7) 00:32:24.403 18.851 - 18.967: 92.2435% ( 13) 00:32:24.403 18.967 - 19.084: 92.3121% ( 6) 00:32:24.403 19.084 - 19.200: 92.3922% ( 7) 00:32:24.403 19.200 - 19.316: 92.4837% ( 8) 00:32:24.403 19.316 - 19.433: 92.5752% ( 8) 00:32:24.403 19.433 - 19.549: 92.5981% ( 2) 00:32:24.403 19.549 - 19.665: 92.6324% ( 3) 00:32:24.403 19.665 - 19.782: 92.6896% ( 5) 00:32:24.403 19.782 - 19.898: 92.7697% ( 7) 00:32:24.403 19.898 - 20.015: 92.8155% ( 4) 00:32:24.403 20.015 - 20.131: 92.8612% ( 4) 00:32:24.403 20.131 - 20.247: 92.8727% ( 1) 00:32:24.403 20.247 - 20.364: 92.9413% ( 6) 00:32:24.403 20.364 - 20.480: 92.9528% ( 1) 00:32:24.403 20.480 - 20.596: 92.9985% ( 4) 00:32:24.403 20.596 - 20.713: 93.0557% ( 5) 00:32:24.403 20.713 - 20.829: 93.0900% ( 3) 00:32:24.403 20.829 - 20.945: 93.1358% ( 4) 00:32:24.403 20.945 - 21.062: 93.2388% ( 9) 00:32:24.403 21.062 - 21.178: 93.2960% ( 5) 00:32:24.403 21.178 - 21.295: 93.3303% ( 3) 00:32:24.403 21.295 - 21.411: 93.3532% ( 2) 00:32:24.403 21.411 - 21.527: 93.3760% ( 2) 00:32:24.403 21.527 - 21.644: 93.3989% ( 2) 00:32:24.403 21.644 - 21.760: 93.4676% ( 6) 00:32:24.403 21.760 - 21.876: 93.5019% ( 3) 00:32:24.403 21.876 - 21.993: 93.5476% ( 4) 00:32:24.403 21.993 - 22.109: 93.6049% ( 5) 00:32:24.403 22.109 - 22.225: 93.6163% ( 1) 00:32:24.403 22.225 - 22.342: 93.6506% ( 3) 00:32:24.403 22.342 - 22.458: 93.7307% ( 7) 00:32:24.403 22.458 - 22.575: 93.7765% ( 4) 00:32:24.403 22.575 - 22.691: 93.8108% ( 3) 00:32:24.403 22.691 - 22.807: 93.8222% ( 1) 00:32:24.403 22.807 - 22.924: 93.8565% ( 3) 00:32:24.403 22.924 - 23.040: 93.8794% ( 2) 00:32:24.403 23.040 - 23.156: 93.9481% ( 6) 00:32:24.403 23.156 - 23.273: 94.0053% ( 5) 00:32:24.403 23.273 - 23.389: 94.0510% ( 4) 00:32:24.403 23.389 - 23.505: 94.0968% ( 4) 00:32:24.403 23.505 - 23.622: 94.1540% ( 5) 00:32:24.403 23.622 - 23.738: 94.1997% ( 4) 00:32:24.403 23.738 - 23.855: 94.2684% ( 6) 00:32:24.403 23.855 - 23.971: 94.3142% ( 4) 00:32:24.403 23.971 - 24.087: 94.3256% ( 1) 00:32:24.403 24.087 - 24.204: 94.3370% ( 1) 00:32:24.403 24.204 - 24.320: 94.3714% ( 3) 00:32:24.403 24.436 - 24.553: 94.4286% ( 5) 00:32:24.403 24.553 - 24.669: 94.4514% ( 2) 00:32:24.403 24.669 - 24.785: 94.4858% ( 3) 00:32:24.403 24.785 - 24.902: 94.5658% ( 7) 00:32:24.403 24.902 - 25.018: 94.5887% ( 2) 00:32:24.403 25.018 - 25.135: 94.6230% ( 3) 00:32:24.403 25.135 - 25.251: 94.6574% ( 3) 00:32:24.403 25.251 - 25.367: 94.6688% ( 1) 00:32:24.403 25.367 - 25.484: 94.6917% ( 2) 00:32:24.403 25.484 - 25.600: 94.7489% ( 5) 00:32:24.403 25.600 - 25.716: 94.7603% ( 1) 00:32:24.403 25.716 - 25.833: 94.7832% ( 
2) 00:32:24.403 25.833 - 25.949: 94.8290% ( 4) 00:32:24.403 25.949 - 26.065: 94.8976% ( 6) 00:32:24.403 26.065 - 26.182: 94.9434% ( 4) 00:32:24.403 26.182 - 26.298: 94.9891% ( 4) 00:32:24.403 26.298 - 26.415: 95.0006% ( 1) 00:32:24.403 26.415 - 26.531: 95.0349% ( 3) 00:32:24.403 26.531 - 26.647: 95.0692% ( 3) 00:32:24.403 26.647 - 26.764: 95.0921% ( 2) 00:32:24.403 26.764 - 26.880: 95.1264% ( 3) 00:32:24.403 26.880 - 26.996: 95.1379% ( 1) 00:32:24.403 26.996 - 27.113: 95.1493% ( 1) 00:32:24.403 27.113 - 27.229: 95.1607% ( 1) 00:32:24.403 27.229 - 27.345: 95.1722% ( 1) 00:32:24.403 27.345 - 27.462: 95.1951% ( 2) 00:32:24.403 27.462 - 27.578: 95.2408% ( 4) 00:32:24.403 27.578 - 27.695: 95.2751% ( 3) 00:32:24.403 27.695 - 27.811: 95.3209% ( 4) 00:32:24.403 27.811 - 27.927: 95.3781% ( 5) 00:32:24.403 27.927 - 28.044: 95.4124% ( 3) 00:32:24.403 28.044 - 28.160: 95.4582% ( 4) 00:32:24.403 28.160 - 28.276: 95.4925% ( 3) 00:32:24.403 28.276 - 28.393: 95.5497% ( 5) 00:32:24.403 28.393 - 28.509: 95.6984% ( 13) 00:32:24.403 28.509 - 28.625: 95.8700% ( 15) 00:32:24.403 28.625 - 28.742: 96.0874% ( 19) 00:32:24.403 28.742 - 28.858: 96.4878% ( 35) 00:32:24.403 28.858 - 28.975: 96.9683% ( 42) 00:32:24.403 28.975 - 29.091: 97.4259% ( 40) 00:32:24.403 29.091 - 29.207: 97.8149% ( 34) 00:32:24.403 29.207 - 29.324: 98.2267% ( 36) 00:32:24.403 29.324 - 29.440: 98.5814% ( 31) 00:32:24.403 29.440 - 29.556: 98.8674% ( 25) 00:32:24.403 29.556 - 29.673: 99.0161% ( 13) 00:32:24.403 29.673 - 29.789: 99.0848% ( 6) 00:32:24.403 29.789 - 30.022: 99.2335% ( 13) 00:32:24.403 30.022 - 30.255: 99.3021% ( 6) 00:32:24.403 30.255 - 30.487: 99.3822% ( 7) 00:32:24.404 30.487 - 30.720: 99.4394% ( 5) 00:32:24.404 30.720 - 30.953: 99.4509% ( 1) 00:32:24.404 30.953 - 31.185: 99.4852% ( 3) 00:32:24.404 31.185 - 31.418: 99.5081% ( 2) 00:32:24.404 31.418 - 31.651: 99.5195% ( 1) 00:32:24.404 31.651 - 31.884: 99.5309% ( 1) 00:32:24.404 31.884 - 32.116: 99.5424% ( 1) 00:32:24.404 32.582 - 32.815: 99.5538% ( 1) 00:32:24.404 33.280 - 33.513: 99.5653% ( 1) 00:32:24.404 33.513 - 33.745: 99.5881% ( 2) 00:32:24.404 34.211 - 34.444: 99.5996% ( 1) 00:32:24.404 34.676 - 34.909: 99.6110% ( 1) 00:32:24.404 34.909 - 35.142: 99.6225% ( 1) 00:32:24.404 35.375 - 35.607: 99.6453% ( 2) 00:32:24.404 36.073 - 36.305: 99.6682% ( 2) 00:32:24.404 36.305 - 36.538: 99.6797% ( 1) 00:32:24.404 36.538 - 36.771: 99.7254% ( 4) 00:32:24.404 36.771 - 37.004: 99.7369% ( 1) 00:32:24.404 37.004 - 37.236: 99.7483% ( 1) 00:32:24.404 37.469 - 37.702: 99.7712% ( 2) 00:32:24.404 37.702 - 37.935: 99.7941% ( 2) 00:32:24.404 37.935 - 38.167: 99.8055% ( 1) 00:32:24.404 38.633 - 38.865: 99.8170% ( 1) 00:32:24.404 39.796 - 40.029: 99.8284% ( 1) 00:32:24.404 40.262 - 40.495: 99.8398% ( 1) 00:32:24.404 43.055 - 43.287: 99.8513% ( 1) 00:32:24.404 45.382 - 45.615: 99.8627% ( 1) 00:32:24.404 46.313 - 46.545: 99.8742% ( 1) 00:32:24.404 47.709 - 47.942: 99.8856% ( 1) 00:32:24.404 48.407 - 48.640: 99.8970% ( 1) 00:32:24.404 55.855 - 56.087: 99.9085% ( 1) 00:32:24.404 56.553 - 56.785: 99.9199% ( 1) 00:32:24.404 68.422 - 68.887: 99.9314% ( 1) 00:32:24.404 69.353 - 69.818: 99.9428% ( 1) 00:32:24.404 72.611 - 73.076: 99.9542% ( 1) 00:32:24.404 82.851 - 83.316: 99.9657% ( 1) 00:32:24.404 116.829 - 117.295: 99.9771% ( 1) 00:32:24.404 123.811 - 124.742: 99.9886% ( 1) 00:32:24.404 222.487 - 223.418: 100.0000% ( 1) 00:32:24.404 00:32:24.404 Complete histogram 00:32:24.404 ================== 00:32:24.404 Range in us Cumulative Count 00:32:24.404 8.145 - 8.204: 0.0229% ( 2) 00:32:24.404 8.262 - 8.320: 
0.0458% ( 2) 00:32:24.404 8.320 - 8.378: 0.0572% ( 1) 00:32:24.404 8.378 - 8.436: 0.4004% ( 30) 00:32:24.404 8.436 - 8.495: 1.2012% ( 70) 00:32:24.404 8.495 - 8.553: 1.7847% ( 51) 00:32:24.404 8.553 - 8.611: 2.0821% ( 26) 00:32:24.404 8.611 - 8.669: 2.4597% ( 33) 00:32:24.404 8.669 - 8.727: 4.9422% ( 217) 00:32:24.404 8.727 - 8.785: 7.3447% ( 210) 00:32:24.404 8.785 - 8.844: 8.6603% ( 115) 00:32:24.404 8.844 - 8.902: 9.2896% ( 55) 00:32:24.404 8.902 - 8.960: 11.9208% ( 230) 00:32:24.404 8.960 - 9.018: 28.6123% ( 1459) 00:32:24.404 9.018 - 9.076: 45.5097% ( 1477) 00:32:24.404 9.076 - 9.135: 52.0307% ( 570) 00:32:24.404 9.135 - 9.193: 54.1471% ( 185) 00:32:24.404 9.193 - 9.251: 55.6000% ( 127) 00:32:24.404 9.251 - 9.309: 58.6317% ( 265) 00:32:24.404 9.309 - 9.367: 60.5766% ( 170) 00:32:24.404 9.367 - 9.425: 61.4575% ( 77) 00:32:24.404 9.425 - 9.484: 61.9266% ( 41) 00:32:24.404 9.484 - 9.542: 62.3956% ( 41) 00:32:24.404 9.542 - 9.600: 62.6244% ( 20) 00:32:24.404 9.600 - 9.658: 62.8418% ( 19) 00:32:24.404 9.658 - 9.716: 63.1392% ( 26) 00:32:24.404 9.716 - 9.775: 63.3108% ( 15) 00:32:24.404 9.775 - 9.833: 63.5511% ( 21) 00:32:24.404 9.833 - 9.891: 63.6998% ( 13) 00:32:24.404 9.891 - 9.949: 63.9401% ( 21) 00:32:24.404 9.949 - 10.007: 64.1117% ( 15) 00:32:24.404 10.007 - 10.065: 64.2718% ( 14) 00:32:24.404 10.065 - 10.124: 64.4434% ( 15) 00:32:24.404 10.124 - 10.182: 64.4549% ( 1) 00:32:24.404 10.182 - 10.240: 64.5350% ( 7) 00:32:24.404 10.240 - 10.298: 64.6150% ( 7) 00:32:24.404 10.298 - 10.356: 64.6608% ( 4) 00:32:24.404 10.356 - 10.415: 64.7409% ( 7) 00:32:24.404 10.415 - 10.473: 64.7752% ( 3) 00:32:24.404 10.473 - 10.531: 64.7981% ( 2) 00:32:24.404 10.531 - 10.589: 64.8438% ( 4) 00:32:24.404 10.589 - 10.647: 64.9354% ( 8) 00:32:24.404 10.647 - 10.705: 65.0040% ( 6) 00:32:24.404 10.705 - 10.764: 65.0726% ( 6) 00:32:24.404 10.764 - 10.822: 65.1642% ( 8) 00:32:24.404 10.822 - 10.880: 65.3129% ( 13) 00:32:24.404 10.880 - 10.938: 65.4273% ( 10) 00:32:24.404 10.938 - 10.996: 65.5417% ( 10) 00:32:24.404 10.996 - 11.055: 65.6904% ( 13) 00:32:24.404 11.055 - 11.113: 65.7705% ( 7) 00:32:24.404 11.113 - 11.171: 65.9421% ( 15) 00:32:24.404 11.171 - 11.229: 68.0243% ( 182) 00:32:24.404 11.229 - 11.287: 77.4397% ( 823) 00:32:24.404 11.287 - 11.345: 86.1457% ( 761) 00:32:24.404 11.345 - 11.404: 89.1431% ( 262) 00:32:24.404 11.404 - 11.462: 90.0812% ( 82) 00:32:24.404 11.462 - 11.520: 90.4816% ( 35) 00:32:24.404 11.520 - 11.578: 90.7104% ( 20) 00:32:24.404 11.578 - 11.636: 90.8477% ( 12) 00:32:24.404 11.636 - 11.695: 90.9621% ( 10) 00:32:24.404 11.695 - 11.753: 91.1909% ( 20) 00:32:24.404 11.753 - 11.811: 91.3625% ( 15) 00:32:24.404 11.811 - 11.869: 91.4769% ( 10) 00:32:24.404 11.869 - 11.927: 91.5227% ( 4) 00:32:24.404 11.927 - 11.985: 91.5799% ( 5) 00:32:24.404 11.985 - 12.044: 91.5914% ( 1) 00:32:24.404 12.044 - 12.102: 91.6028% ( 1) 00:32:24.404 12.102 - 12.160: 91.6600% ( 5) 00:32:24.404 12.160 - 12.218: 91.7058% ( 4) 00:32:24.404 12.218 - 12.276: 91.7744% ( 6) 00:32:24.404 12.276 - 12.335: 91.8545% ( 7) 00:32:24.404 12.335 - 12.393: 91.9231% ( 6) 00:32:24.404 12.393 - 12.451: 91.9689% ( 4) 00:32:24.404 12.451 - 12.509: 92.0833% ( 10) 00:32:24.404 12.509 - 12.567: 92.1405% ( 5) 00:32:24.404 12.625 - 12.684: 92.1634% ( 2) 00:32:24.404 12.684 - 12.742: 92.2091% ( 4) 00:32:24.404 12.742 - 12.800: 92.2320% ( 2) 00:32:24.404 12.800 - 12.858: 92.2435% ( 1) 00:32:24.404 12.858 - 12.916: 92.2778% ( 3) 00:32:24.404 12.916 - 12.975: 92.3121% ( 3) 00:32:24.404 12.975 - 13.033: 92.3350% ( 2) 00:32:24.404 13.033 - 
13.091: 92.3464% ( 1) 00:32:24.404 13.091 - 13.149: 92.3579% ( 1) 00:32:24.404 13.149 - 13.207: 92.4036% ( 4) 00:32:24.404 13.207 - 13.265: 92.4494% ( 4) 00:32:24.404 13.265 - 13.324: 92.5066% ( 5) 00:32:24.404 13.324 - 13.382: 92.5409% ( 3) 00:32:24.404 13.382 - 13.440: 92.5523% ( 1) 00:32:24.404 13.440 - 13.498: 92.5752% ( 2) 00:32:24.404 13.498 - 13.556: 92.5867% ( 1) 00:32:24.404 13.556 - 13.615: 92.6439% ( 5) 00:32:24.404 13.615 - 13.673: 92.6667% ( 2) 00:32:24.404 13.673 - 13.731: 92.7926% ( 11) 00:32:24.404 13.731 - 13.789: 92.8841% ( 8) 00:32:24.404 13.789 - 13.847: 92.9871% ( 9) 00:32:24.404 13.847 - 13.905: 93.1244% ( 12) 00:32:24.404 13.905 - 13.964: 93.1701% ( 4) 00:32:24.404 13.964 - 14.022: 93.2273% ( 5) 00:32:24.404 14.022 - 14.080: 93.2845% ( 5) 00:32:24.404 14.080 - 14.138: 93.3303% ( 4) 00:32:24.404 14.138 - 14.196: 93.3532% ( 2) 00:32:24.404 14.196 - 14.255: 93.3760% ( 2) 00:32:24.404 14.313 - 14.371: 93.3989% ( 2) 00:32:24.404 14.429 - 14.487: 93.4218% ( 2) 00:32:24.404 14.487 - 14.545: 93.4561% ( 3) 00:32:24.404 14.545 - 14.604: 93.5133% ( 5) 00:32:24.404 14.662 - 14.720: 93.5362% ( 2) 00:32:24.404 14.720 - 14.778: 93.5591% ( 2) 00:32:24.404 14.778 - 14.836: 93.5705% ( 1) 00:32:24.404 14.836 - 14.895: 93.5820% ( 1) 00:32:24.404 14.895 - 15.011: 93.6277% ( 4) 00:32:24.404 15.127 - 15.244: 93.6621% ( 3) 00:32:24.404 15.244 - 15.360: 93.6735% ( 1) 00:32:24.404 15.360 - 15.476: 93.6849% ( 1) 00:32:24.404 15.476 - 15.593: 93.7078% ( 2) 00:32:24.404 15.593 - 15.709: 93.7536% ( 4) 00:32:24.404 15.709 - 15.825: 93.8222% ( 6) 00:32:24.404 15.825 - 15.942: 93.8680% ( 4) 00:32:24.404 15.942 - 16.058: 93.9023% ( 3) 00:32:24.404 16.058 - 16.175: 93.9481% ( 4) 00:32:24.404 16.175 - 16.291: 93.9709% ( 2) 00:32:24.404 16.291 - 16.407: 94.0396% ( 6) 00:32:24.404 16.407 - 16.524: 94.0739% ( 3) 00:32:24.404 16.524 - 16.640: 94.1082% ( 3) 00:32:24.404 16.640 - 16.756: 94.1197% ( 1) 00:32:24.404 16.756 - 16.873: 94.1540% ( 3) 00:32:24.404 16.873 - 16.989: 94.1769% ( 2) 00:32:24.404 16.989 - 17.105: 94.2112% ( 3) 00:32:24.404 17.105 - 17.222: 94.2684% ( 5) 00:32:24.404 17.222 - 17.338: 94.3142% ( 4) 00:32:24.404 17.338 - 17.455: 94.3370% ( 2) 00:32:24.404 17.455 - 17.571: 94.3714% ( 3) 00:32:24.404 17.804 - 17.920: 94.3942% ( 2) 00:32:24.404 18.036 - 18.153: 94.4171% ( 2) 00:32:24.404 18.153 - 18.269: 94.4858% ( 6) 00:32:24.404 18.269 - 18.385: 94.5201% ( 3) 00:32:24.404 18.385 - 18.502: 94.5430% ( 2) 00:32:24.404 18.502 - 18.618: 94.5544% ( 1) 00:32:24.404 18.618 - 18.735: 94.5887% ( 3) 00:32:24.404 18.851 - 18.967: 94.6116% ( 2) 00:32:24.404 18.967 - 19.084: 94.6459% ( 3) 00:32:24.404 19.084 - 19.200: 94.6802% ( 3) 00:32:24.404 19.200 - 19.316: 94.6917% ( 1) 00:32:24.404 19.316 - 19.433: 94.7031% ( 1) 00:32:24.405 19.433 - 19.549: 94.7374% ( 3) 00:32:24.405 19.549 - 19.665: 94.7603% ( 2) 00:32:24.405 19.665 - 19.782: 94.7946% ( 3) 00:32:24.405 19.782 - 19.898: 94.8175% ( 2) 00:32:24.405 19.898 - 20.015: 94.8404% ( 2) 00:32:24.405 20.015 - 20.131: 94.8518% ( 1) 00:32:24.405 20.131 - 20.247: 94.8862% ( 3) 00:32:24.405 20.247 - 20.364: 94.9090% ( 2) 00:32:24.405 20.364 - 20.480: 94.9663% ( 5) 00:32:24.405 20.480 - 20.596: 95.0349% ( 6) 00:32:24.405 20.596 - 20.713: 95.0463% ( 1) 00:32:24.405 20.713 - 20.829: 95.0692% ( 2) 00:32:24.405 20.829 - 20.945: 95.1035% ( 3) 00:32:24.405 20.945 - 21.062: 95.1150% ( 1) 00:32:24.405 21.062 - 21.178: 95.1379% ( 2) 00:32:24.405 21.178 - 21.295: 95.1722% ( 3) 00:32:24.405 21.295 - 21.411: 95.1951% ( 2) 00:32:24.405 21.411 - 21.527: 95.2179% ( 2) 
00:32:24.405 21.527 - 21.644: 95.2523% ( 3) 00:32:24.405 21.644 - 21.760: 95.2751% ( 2) 00:32:24.405 21.760 - 21.876: 95.2866% ( 1) 00:32:24.405 21.876 - 21.993: 95.3323% ( 4) 00:32:24.405 21.993 - 22.109: 95.3781% ( 4) 00:32:24.405 22.109 - 22.225: 95.4010% ( 2) 00:32:24.405 22.342 - 22.458: 95.4353% ( 3) 00:32:24.405 22.575 - 22.691: 95.4467% ( 1) 00:32:24.405 22.691 - 22.807: 95.4696% ( 2) 00:32:24.405 22.924 - 23.040: 95.4925% ( 2) 00:32:24.405 23.156 - 23.273: 95.5039% ( 1) 00:32:24.405 23.273 - 23.389: 95.5268% ( 2) 00:32:24.405 23.389 - 23.505: 95.6069% ( 7) 00:32:24.405 23.505 - 23.622: 95.6641% ( 5) 00:32:24.405 23.622 - 23.738: 95.8357% ( 15) 00:32:24.405 23.738 - 23.855: 96.2018% ( 32) 00:32:24.405 23.855 - 23.971: 96.6251% ( 37) 00:32:24.405 23.971 - 24.087: 97.2658% ( 56) 00:32:24.405 24.087 - 24.204: 97.7920% ( 46) 00:32:24.405 24.204 - 24.320: 98.3297% ( 47) 00:32:24.405 24.320 - 24.436: 98.6157% ( 25) 00:32:24.405 24.436 - 24.553: 98.9704% ( 31) 00:32:24.405 24.553 - 24.669: 99.0962% ( 11) 00:32:24.405 24.669 - 24.785: 99.1763% ( 7) 00:32:24.405 24.785 - 24.902: 99.1992% ( 2) 00:32:24.405 24.902 - 25.018: 99.2221% ( 2) 00:32:24.405 25.018 - 25.135: 99.2564% ( 3) 00:32:24.405 25.135 - 25.251: 99.3250% ( 6) 00:32:24.405 25.251 - 25.367: 99.3365% ( 1) 00:32:24.405 25.484 - 25.600: 99.3708% ( 3) 00:32:24.405 25.600 - 25.716: 99.4165% ( 4) 00:32:24.405 25.716 - 25.833: 99.4394% ( 2) 00:32:24.405 25.833 - 25.949: 99.4737% ( 3) 00:32:24.405 26.065 - 26.182: 99.4966% ( 2) 00:32:24.405 26.298 - 26.415: 99.5081% ( 1) 00:32:24.405 26.531 - 26.647: 99.5195% ( 1) 00:32:24.405 27.578 - 27.695: 99.5309% ( 1) 00:32:24.405 27.695 - 27.811: 99.5424% ( 1) 00:32:24.405 27.927 - 28.044: 99.5538% ( 1) 00:32:24.405 28.044 - 28.160: 99.5653% ( 1) 00:32:24.405 28.858 - 28.975: 99.5767% ( 1) 00:32:24.405 29.556 - 29.673: 99.5881% ( 1) 00:32:24.405 30.487 - 30.720: 99.5996% ( 1) 00:32:24.405 30.720 - 30.953: 99.6225% ( 2) 00:32:24.405 30.953 - 31.185: 99.6339% ( 1) 00:32:24.405 31.185 - 31.418: 99.6453% ( 1) 00:32:24.405 31.418 - 31.651: 99.6568% ( 1) 00:32:24.405 32.116 - 32.349: 99.6682% ( 1) 00:32:24.405 32.349 - 32.582: 99.6797% ( 1) 00:32:24.405 33.280 - 33.513: 99.6911% ( 1) 00:32:24.405 34.211 - 34.444: 99.7026% ( 1) 00:32:24.405 35.142 - 35.375: 99.7140% ( 1) 00:32:24.405 39.098 - 39.331: 99.7254% ( 1) 00:32:24.405 40.495 - 40.727: 99.7369% ( 1) 00:32:24.405 41.425 - 41.658: 99.7483% ( 1) 00:32:24.405 44.218 - 44.451: 99.7598% ( 1) 00:32:24.405 47.244 - 47.476: 99.7712% ( 1) 00:32:24.405 50.269 - 50.502: 99.7826% ( 1) 00:32:24.405 54.225 - 54.458: 99.7941% ( 1) 00:32:24.405 57.716 - 57.949: 99.8055% ( 1) 00:32:24.405 59.345 - 59.578: 99.8170% ( 1) 00:32:24.405 65.629 - 66.095: 99.8398% ( 2) 00:32:24.405 67.025 - 67.491: 99.8513% ( 1) 00:32:24.405 69.353 - 69.818: 99.8627% ( 1) 00:32:24.405 74.007 - 74.473: 99.8742% ( 1) 00:32:24.405 85.178 - 85.644: 99.8856% ( 1) 00:32:24.405 102.400 - 102.865: 99.8970% ( 1) 00:32:24.405 114.036 - 114.502: 99.9085% ( 1) 00:32:24.405 114.502 - 114.967: 99.9199% ( 1) 00:32:24.405 117.295 - 117.760: 99.9314% ( 1) 00:32:24.405 119.156 - 120.087: 99.9428% ( 1) 00:32:24.405 127.535 - 128.465: 99.9542% ( 1) 00:32:24.405 133.120 - 134.051: 99.9657% ( 1) 00:32:24.405 161.047 - 161.978: 99.9771% ( 1) 00:32:24.405 169.425 - 170.356: 99.9886% ( 1) 00:32:24.405 243.898 - 245.760: 100.0000% ( 1) 00:32:24.405 00:32:24.405 ************************************ 00:32:24.405 END TEST nvme_overhead 00:32:24.405 ************************************ 00:32:24.405 00:32:24.405 real 
0m1.327s 00:32:24.405 user 0m1.120s 00:32:24.405 sys 0m0.132s 00:32:24.405 00:51:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:24.405 00:51:57 -- common/autotest_common.sh@10 -- # set +x 00:32:24.405 00:51:57 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:32:24.405 00:51:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:32:24.405 00:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:24.405 00:51:57 -- common/autotest_common.sh@10 -- # set +x 00:32:24.405 ************************************ 00:32:24.405 START TEST nvme_arbitration 00:32:24.405 ************************************ 00:32:24.405 00:51:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:32:27.715 Initializing NVMe Controllers 00:32:27.715 Attached to 0000:00:10.0 00:32:27.715 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:32:27.715 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:32:27.715 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:32:27.715 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:32:27.715 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:32:27.715 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:32:27.715 Initialization complete. Launching workers. 00:32:27.715 Starting thread on core 1 with urgent priority queue 00:32:27.715 Starting thread on core 2 with urgent priority queue 00:32:27.715 Starting thread on core 0 with urgent priority queue 00:32:27.715 Starting thread on core 3 with urgent priority queue 00:32:27.715 QEMU NVMe Ctrl (12340 ) core 0: 1322.67 IO/s 75.60 secs/100000 ios 00:32:27.715 QEMU NVMe Ctrl (12340 ) core 1: 1130.67 IO/s 88.44 secs/100000 ios 00:32:27.715 QEMU NVMe Ctrl (12340 ) core 2: 597.33 IO/s 167.41 secs/100000 ios 00:32:27.715 QEMU NVMe Ctrl (12340 ) core 3: 832.00 IO/s 120.19 secs/100000 ios 00:32:27.715 ======================================================== 00:32:27.715 00:32:27.715 ************************************ 00:32:27.715 END TEST nvme_arbitration 00:32:27.715 ************************************ 00:32:27.715 00:32:27.715 real 0m3.458s 00:32:27.715 user 0m9.409s 00:32:27.715 sys 0m0.128s 00:32:27.715 00:52:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:27.715 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:32:27.973 00:52:01 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:32:27.973 00:52:01 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:32:27.973 00:52:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:27.973 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:32:27.973 ************************************ 00:32:27.973 START TEST nvme_single_aen 00:32:27.973 ************************************ 00:32:27.973 00:52:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:32:28.232 Asynchronous Event Request test 00:32:28.232 Attached to 0000:00:10.0 00:32:28.232 Reset controller to setup AER completions for this process 00:32:28.232 Registering asynchronous event callbacks... 
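The sequence that follows is the temperature-threshold trick the aer tool uses to force an event: read the original threshold (343 Kelvin here), drop it below the current reading (323 Kelvin) so the controller raises an Asynchronous Event, then restore it from the aer_cb. Against a kernel-owned device the same threshold manipulation would look roughly like this with nvme-cli (feature id 0x04 is Temperature Threshold; the flag spellings are assumed from current nvme-cli, and the AER itself would be consumed by the kernel rather than a user callback):

    # Illustrative only; test/nvme/aer/aer does all of this through the SPDK driver.
    nvme get-feature /dev/nvme0 -f 0x04                 # read the current threshold
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x118        # 0x118 = 280 K, below the 323 K reading
    nvme get-log /dev/nvme0 -i 2 -l 512 >/dev/null      # reading SMART log page 2 clears the event
    nvme set-feature /dev/nvme0 -f 0x04 -v 0x157        # restore the original 343 K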
00:32:28.232 Getting orig temperature thresholds of all controllers 00:32:28.232 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:28.232 Setting all controllers temperature threshold low to trigger AER 00:32:28.232 Waiting for all controllers temperature threshold to be set lower 00:32:28.232 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:28.232 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:32:28.232 Waiting for all controllers to trigger AER and reset threshold 00:32:28.232 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:28.232 Cleaning up... 00:32:28.232 ************************************ 00:32:28.232 END TEST nvme_single_aen 00:32:28.232 ************************************ 00:32:28.232 00:32:28.232 real 0m0.278s 00:32:28.232 user 0m0.100s 00:32:28.232 sys 0m0.099s 00:32:28.232 00:52:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:28.232 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:32:28.232 00:52:01 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:32:28.232 00:52:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:28.232 00:52:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:28.232 00:52:01 -- common/autotest_common.sh@10 -- # set +x 00:32:28.232 ************************************ 00:32:28.232 START TEST nvme_doorbell_aers 00:32:28.232 ************************************ 00:32:28.232 00:52:01 -- common/autotest_common.sh@1111 -- # nvme_doorbell_aers 00:32:28.232 00:52:01 -- nvme/nvme.sh@70 -- # bdfs=() 00:32:28.232 00:52:01 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:32:28.232 00:52:01 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:32:28.232 00:52:01 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:32:28.232 00:52:01 -- common/autotest_common.sh@1499 -- # bdfs=() 00:32:28.232 00:52:01 -- common/autotest_common.sh@1499 -- # local bdfs 00:32:28.232 00:52:01 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:28.232 00:52:01 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:28.232 00:52:01 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:32:28.232 00:52:01 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:32:28.232 00:52:01 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:32:28.232 00:52:01 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:32:28.232 00:52:01 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:32:28.799 [2024-04-27 00:52:02.088354] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 147765) is not found. Dropping the request. 00:32:38.770 Executing: test_write_invalid_db 00:32:38.770 Waiting for AER completion... 00:32:38.770 Failure: test_write_invalid_db 00:32:38.770 00:32:38.770 Executing: test_invalid_db_write_overflow_sq 00:32:38.770 Waiting for AER completion... 00:32:38.770 Failure: test_invalid_db_write_overflow_sq 00:32:38.770 00:32:38.770 Executing: test_invalid_db_write_overflow_cq 00:32:38.770 Waiting for AER completion... 
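The get_nvme_bdfs xtrace above is worth unpacking: nvme_doorbell_aers discovers which PCI functions to target by running scripts/gen_nvme.sh, which emits a JSON bdev config for every NVMe device, then filtering the transport addresses out with jq. The traced lines reduce to:

    # Condensed from the xtrace above; rootdir points at the spdk checkout.
    rootdir=/home/vagrant/spdk_repo/spdk
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        ((${#bdfs[@]})) || return 1      # the (( 1 == 0 )) check above guards this case
        printf '%s\n' "${bdfs[@]}"       # here a single device: 0000:00:10.0
    }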
00:32:38.770 Failure: test_invalid_db_write_overflow_cq 00:32:38.770 00:32:38.770 ************************************ 00:32:38.770 END TEST nvme_doorbell_aers 00:32:38.770 ************************************ 00:32:38.770 00:32:38.770 real 0m10.111s 00:32:38.770 user 0m8.713s 00:32:38.770 sys 0m1.343s 00:32:38.770 00:52:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:38.770 00:52:11 -- common/autotest_common.sh@10 -- # set +x 00:32:38.770 00:52:11 -- nvme/nvme.sh@97 -- # uname 00:32:38.770 00:52:11 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:32:38.770 00:52:11 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:32:38.770 00:52:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:32:38.770 00:52:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:38.770 00:52:11 -- common/autotest_common.sh@10 -- # set +x 00:32:38.770 ************************************ 00:32:38.770 START TEST nvme_multi_aen 00:32:38.770 ************************************ 00:32:38.770 00:52:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:32:38.770 [2024-04-27 00:52:12.199940] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 147765) is not found. Dropping the request. 00:32:38.770 [2024-04-27 00:52:12.200290] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 147765) is not found. Dropping the request. 00:32:38.770 [2024-04-27 00:52:12.200452] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 147765) is not found. Dropping the request. 00:32:38.770 Child process pid: 147960 00:32:39.028 [Child] Asynchronous Event Request test 00:32:39.028 [Child] Attached to 0000:00:10.0 00:32:39.028 [Child] Registering asynchronous event callbacks... 00:32:39.028 [Child] Getting orig temperature thresholds of all controllers 00:32:39.028 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:39.028 [Child] Waiting for all controllers to trigger AER and reset threshold 00:32:39.028 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:39.028 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:39.028 [Child] Cleaning up... 00:32:39.028 Asynchronous Event Request test 00:32:39.028 Attached to 0000:00:10.0 00:32:39.028 Reset controller to setup AER completions for this process 00:32:39.028 Registering asynchronous event callbacks... 00:32:39.028 Getting orig temperature thresholds of all controllers 00:32:39.028 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:32:39.028 Setting all controllers temperature threshold low to trigger AER 00:32:39.028 Waiting for all controllers temperature threshold to be set lower 00:32:39.028 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:32:39.028 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:32:39.028 Waiting for all controllers to trigger AER and reset threshold 00:32:39.028 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:32:39.028 Cleaning up... 
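The nvme_multi_aen run above reused the same aer binary with -m, which makes it fork: a [Child]-prefixed process and the parent each register AER callbacks against the shared controller, which is why the transcript shows the event sequence twice. A hypothetical assertion on that behaviour, using the binary path from this log (this check is not part of nvme.sh itself):

    # Hypothetical check: both the child and the parent sections must finish.
    out=$(/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0)
    grep -q '^\[Child\] Cleaning up' <<<"$out" || exit 1
    grep -q '^Cleaning up'           <<<"$out" || exit 1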
00:32:39.028 ************************************ 00:32:39.028 END TEST nvme_multi_aen 00:32:39.028 ************************************ 00:32:39.028 00:32:39.028 real 0m0.655s 00:32:39.028 user 0m0.243s 00:32:39.028 sys 0m0.234s 00:32:39.028 00:52:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:39.028 00:52:12 -- common/autotest_common.sh@10 -- # set +x 00:32:39.286 00:52:12 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:32:39.286 00:52:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:32:39.286 00:52:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:39.286 00:52:12 -- common/autotest_common.sh@10 -- # set +x 00:32:39.286 ************************************ 00:32:39.286 START TEST nvme_startup 00:32:39.286 ************************************ 00:32:39.286 00:52:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:32:39.543 Initializing NVMe Controllers 00:32:39.543 Attached to 0000:00:10.0 00:32:39.543 Initialization complete. 00:32:39.543 Time used:202333.078 (us). 00:32:39.543 ************************************ 00:32:39.543 END TEST nvme_startup 00:32:39.543 ************************************ 00:32:39.543 00:32:39.543 real 0m0.303s 00:32:39.543 user 0m0.118s 00:32:39.543 sys 0m0.119s 00:32:39.543 00:52:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:39.543 00:52:12 -- common/autotest_common.sh@10 -- # set +x 00:32:39.543 00:52:13 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:32:39.543 00:52:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:39.543 00:52:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:39.543 00:52:13 -- common/autotest_common.sh@10 -- # set +x 00:32:39.543 ************************************ 00:32:39.543 START TEST nvme_multi_secondary 00:32:39.543 ************************************ 00:32:39.543 00:52:13 -- common/autotest_common.sh@1111 -- # nvme_multi_secondary 00:32:39.543 00:52:13 -- nvme/nvme.sh@52 -- # pid0=148040 00:32:39.543 00:52:13 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:32:39.543 00:52:13 -- nvme/nvme.sh@54 -- # pid1=148041 00:32:39.543 00:52:13 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:32:39.543 00:52:13 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:32:43.726 Initializing NVMe Controllers 00:32:43.726 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:43.726 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:32:43.726 Initialization complete. Launching workers. 00:32:43.726 ======================================================== 00:32:43.726 Latency(us) 00:32:43.726 Device Information : IOPS MiB/s Average min max 00:32:43.726 PCIE (0000:00:10.0) NSID 1 from core 1: 35703.65 139.47 447.78 111.14 2617.15 00:32:43.726 ======================================================== 00:32:43.726 Total : 35703.65 139.47 447.78 111.14 2617.15 00:32:43.726 00:32:43.726 Initializing NVMe Controllers 00:32:43.726 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:43.726 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:32:43.726 Initialization complete. Launching workers. 
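nvme_multi_secondary exercises SPDK's multi-process mode: the three spdk_nvme_perf instances launched above share DPDK shm id 0 via -i 0 and run on disjoint core masks (0x1, 0x2, 0x4), so one primary and two secondary processes drive the same controller concurrently. Stripped of nvme.sh's pid bookkeeping, the launch pattern is:

    # The three perf invocations from the trace above, run concurrently;
    # -i 0 is the shared shm id that lets later starters attach as secondaries.
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & p0=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & p1=$!
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & p2=$!
    wait "$p0" "$p1" "$p2"

The -t 5 primary outlives the -t 3 secondaries, presumably so it is still up while they attach and run.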
00:32:43.726 ======================================================== 00:32:43.726 Latency(us) 00:32:43.726 Device Information : IOPS MiB/s Average min max 00:32:43.726 PCIE (0000:00:10.0) NSID 1 from core 2: 15119.96 59.06 1056.83 144.66 20698.84 00:32:43.726 ======================================================== 00:32:43.726 Total : 15119.96 59.06 1056.83 144.66 20698.84 00:32:43.726 00:32:43.726 00:52:16 -- nvme/nvme.sh@56 -- # wait 148040 00:32:45.104 Initializing NVMe Controllers 00:32:45.104 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:45.104 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:45.104 Initialization complete. Launching workers. 00:32:45.104 ======================================================== 00:32:45.104 Latency(us) 00:32:45.104 Device Information : IOPS MiB/s Average min max 00:32:45.104 PCIE (0000:00:10.0) NSID 1 from core 0: 45244.39 176.74 353.30 113.22 15592.37 00:32:45.104 ======================================================== 00:32:45.104 Total : 45244.39 176.74 353.30 113.22 15592.37 00:32:45.104 00:32:45.104 00:52:18 -- nvme/nvme.sh@57 -- # wait 148041 00:32:45.104 00:52:18 -- nvme/nvme.sh@61 -- # pid0=148116 00:32:45.104 00:52:18 -- nvme/nvme.sh@63 -- # pid1=148117 00:32:45.104 00:52:18 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:32:45.104 00:52:18 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:32:45.104 00:52:18 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:32:48.413 Initializing NVMe Controllers 00:32:48.413 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:48.413 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:32:48.413 Initialization complete. Launching workers. 00:32:48.413 ======================================================== 00:32:48.413 Latency(us) 00:32:48.413 Device Information : IOPS MiB/s Average min max 00:32:48.413 PCIE (0000:00:10.0) NSID 1 from core 0: 35484.49 138.61 450.51 115.25 1566.79 00:32:48.413 ======================================================== 00:32:48.413 Total : 35484.49 138.61 450.51 115.25 1566.79 00:32:48.413 00:32:48.413 Initializing NVMe Controllers 00:32:48.413 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:48.413 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:32:48.413 Initialization complete. Launching workers. 00:32:48.413 ======================================================== 00:32:48.413 Latency(us) 00:32:48.413 Device Information : IOPS MiB/s Average min max 00:32:48.413 PCIE (0000:00:10.0) NSID 1 from core 1: 34413.56 134.43 464.52 143.60 1554.03 00:32:48.413 ======================================================== 00:32:48.413 Total : 34413.56 134.43 464.52 143.60 1554.03 00:32:48.413 00:32:50.936 Initializing NVMe Controllers 00:32:50.936 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:32:50.936 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:32:50.936 Initialization complete. Launching workers. 
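The perf tables print IOPS and MiB/s side by side, but the two columns are tied by the io size: MiB/s = IOPS * 4096 / 2^20 for the 4 KiB reads used here. Checking the lcore 0 row above:

    # 35484.49 IOPS at 4 KiB per IO; matches the 138.61 MiB/s in the table.
    awk 'BEGIN { printf "%.2f\n", 35484.49 * 4096 / 1048576 }'   # prints 138.61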
00:32:50.936 ======================================================== 00:32:50.936 Latency(us) 00:32:50.936 Device Information : IOPS MiB/s Average min max 00:32:50.936 PCIE (0000:00:10.0) NSID 1 from core 2: 19107.56 74.64 835.96 125.08 28752.26 00:32:50.936 ======================================================== 00:32:50.936 Total : 19107.56 74.64 835.96 125.08 28752.26 00:32:50.936 00:32:50.936 ************************************ 00:32:50.936 END TEST nvme_multi_secondary 00:32:50.936 ************************************ 00:32:50.936 00:52:24 -- nvme/nvme.sh@65 -- # wait 148116 00:32:50.936 00:52:24 -- nvme/nvme.sh@66 -- # wait 148117 00:32:50.936 00:32:50.936 real 0m10.964s 00:32:50.936 user 0m18.618s 00:32:50.936 sys 0m0.831s 00:32:50.936 00:52:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:50.936 00:52:24 -- common/autotest_common.sh@10 -- # set +x 00:32:50.936 00:52:24 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:32:50.936 00:52:24 -- nvme/nvme.sh@102 -- # kill_stub 00:32:50.936 00:52:24 -- common/autotest_common.sh@1075 -- # [[ -e /proc/147261 ]] 00:32:50.936 00:52:24 -- common/autotest_common.sh@1076 -- # kill 147261 00:32:50.936 00:52:24 -- common/autotest_common.sh@1077 -- # wait 147261 00:32:50.936 [2024-04-27 00:52:24.085848] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 147955) is not found. Dropping the request. 00:32:50.936 [2024-04-27 00:52:24.085987] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 147955) is not found. Dropping the request. 00:32:50.936 [2024-04-27 00:52:24.086048] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 147955) is not found. Dropping the request. 00:32:50.936 [2024-04-27 00:52:24.086103] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 147955) is not found. Dropping the request. 00:32:50.936 00:52:24 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:32:50.936 00:52:24 -- common/autotest_common.sh@1083 -- # echo 2 00:32:50.936 00:52:24 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:32:50.936 00:52:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:50.936 00:52:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:50.936 00:52:24 -- common/autotest_common.sh@10 -- # set +x 00:32:50.936 ************************************ 00:32:50.936 START TEST bdev_nvme_reset_stuck_adm_cmd 00:32:50.936 ************************************ 00:32:50.936 00:52:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:32:50.936 * Looking for test storage... 
00:32:50.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:32:50.937 00:52:24 -- common/autotest_common.sh@1510 -- # bdfs=() 00:32:50.937 00:52:24 -- common/autotest_common.sh@1510 -- # local bdfs 00:32:50.937 00:52:24 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:32:50.937 00:52:24 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:32:50.937 00:52:24 -- common/autotest_common.sh@1499 -- # bdfs=() 00:32:50.937 00:52:24 -- common/autotest_common.sh@1499 -- # local bdfs 00:32:50.937 00:52:24 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:50.937 00:52:24 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:50.937 00:52:24 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:32:50.937 00:52:24 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:32:50.937 00:52:24 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:32:50.937 00:52:24 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=148268 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:50.937 00:52:24 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 148268 00:32:50.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.937 00:52:24 -- common/autotest_common.sh@817 -- # '[' -z 148268 ']' 00:32:50.937 00:52:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.937 00:52:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:50.937 00:52:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.937 00:52:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:50.937 00:52:24 -- common/autotest_common.sh@10 -- # set +x 00:32:51.194 [2024-04-27 00:52:24.576021] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
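The test being set up here is the stuck-admin-command scenario: spdk_tgt attaches the device as nvme0, an error injection arms a single failure (sct 0 / sc 1, the INVALID OPCODE status printed later) on the next Get Features admin command (--opc 10, held for up to 15 s), and a controller reset is then expected to complete the held command within the 5 s test_timeout. Condensed from the rpc calls traced below, with the base64 Get Features payload truncated here (the full blob appears verbatim in the trace):

    # Sketch of the rpc sequence this test drives; arguments from the xtrace below.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_features_b64='CgAA...'   # truncated; full base64 command in the trace below
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
         --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$get_features_b64" &  # blocks on the held command
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0   # the reset completes the stuck command
    wait $!                                 # send_cmd returns once the reset fires
    $rpc bdev_nvme_detach_controller nvme0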
00:32:51.195 [2024-04-27 00:52:24.576280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148268 ] 00:32:51.195 [2024-04-27 00:52:24.779625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:51.452 [2024-04-27 00:52:25.036905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.452 [2024-04-27 00:52:25.037027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:51.452 [2024-04-27 00:52:25.037327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.452 [2024-04-27 00:52:25.037331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.386 00:52:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:52.386 00:52:25 -- common/autotest_common.sh@850 -- # return 0 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:32:52.386 00:52:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:52.386 00:52:25 -- common/autotest_common.sh@10 -- # set +x 00:32:52.386 nvme0n1 00:32:52.386 00:52:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_paEqZ.txt 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:32:52.386 00:52:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:52.386 00:52:25 -- common/autotest_common.sh@10 -- # set +x 00:32:52.386 true 00:32:52.386 00:52:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1714179145 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=148296 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:32:52.386 00:52:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:32:54.913 00:52:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.913 00:52:27 -- common/autotest_common.sh@10 -- # set +x 00:32:54.913 [2024-04-27 00:52:27.892781] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:32:54.913 [2024-04-27 00:52:27.893279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.913 [2024-04-27 00:52:27.893496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:32:54.913 [2024-04-27 00:52:27.893708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.913 [2024-04-27 00:52:27.895613] 
bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:54.913 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 148296 00:32:54.913 00:52:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 148296 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 148296 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.913 00:52:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.913 00:52:27 -- common/autotest_common.sh@10 -- # set +x 00:32:54.913 00:52:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_paEqZ.txt 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:32:54.913 00:52:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:32:54.913 00:52:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:32:54.913 00:52:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:32:54.913 00:52:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:32:54.913 00:52:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_paEqZ.txt 00:32:54.913 00:52:28 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 148268 00:32:54.913 00:52:28 -- common/autotest_common.sh@936 -- # '[' -z 148268 ']' 00:32:54.913 00:52:28 -- common/autotest_common.sh@940 -- # kill -0 148268 00:32:54.913 00:52:28 -- common/autotest_common.sh@941 -- # uname 00:32:54.913 00:52:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:54.913 00:52:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
148268 00:32:54.913 00:52:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:54.913 killing process with pid 148268 00:32:54.913 00:52:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:54.913 00:52:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148268' 00:32:54.913 00:52:28 -- common/autotest_common.sh@955 -- # kill 148268 00:32:54.913 00:52:28 -- common/autotest_common.sh@960 -- # wait 148268 00:32:56.813 00:52:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:32:56.813 00:52:30 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:32:56.813 00:32:56.813 real 0m5.717s 00:32:56.813 user 0m19.763s 00:32:56.813 sys 0m0.643s 00:32:56.813 00:52:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:56.813 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:32:56.813 ************************************ 00:32:56.813 END TEST bdev_nvme_reset_stuck_adm_cmd 00:32:56.813 ************************************ 00:32:56.813 00:52:30 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:32:56.813 00:52:30 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:32:56.813 00:52:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:56.813 00:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:56.813 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:32:56.813 ************************************ 00:32:56.813 START TEST nvme_fio 00:32:56.813 ************************************ 00:32:56.813 00:52:30 -- common/autotest_common.sh@1111 -- # nvme_fio_test 00:32:56.813 00:52:30 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:32:56.813 00:52:30 -- nvme/nvme.sh@32 -- # ran_fio=false 00:32:56.813 00:52:30 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:32:56.813 00:52:30 -- common/autotest_common.sh@1499 -- # bdfs=() 00:32:56.813 00:52:30 -- common/autotest_common.sh@1499 -- # local bdfs 00:32:56.813 00:52:30 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:56.813 00:52:30 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:56.813 00:52:30 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:32:56.813 00:52:30 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:32:56.813 00:52:30 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:32:56.813 00:52:30 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:32:56.813 00:52:30 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:32:56.813 00:52:30 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:32:56.813 00:52:30 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:32:56.813 00:52:30 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:32:57.071 00:52:30 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:32:57.071 00:52:30 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:32:57.328 00:52:30 -- nvme/nvme.sh@41 -- # bs=4096 00:32:57.328 00:52:30 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:32:57.328 00:52:30 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:32:57.328 00:52:30 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:57.328 00:52:30 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:57.328 00:52:30 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:57.328 00:52:30 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:57.328 00:52:30 -- common/autotest_common.sh@1327 -- # shift 00:32:57.328 00:52:30 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:57.328 00:52:30 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.328 00:52:30 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:32:57.328 00:52:30 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:57.328 00:52:30 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:57.328 00:52:30 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:32:57.328 00:52:30 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:32:57.328 00:52:30 -- common/autotest_common.sh@1333 -- # break 00:32:57.328 00:52:30 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:32:57.329 00:52:30 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:32:57.329 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:57.329 fio-3.35 00:32:57.329 Starting 1 thread 00:33:00.606 00:33:00.606 test: (groupid=0, jobs=1): err= 0: pid=148449: Sat Apr 27 00:52:34 2024 00:33:00.606 read: IOPS=18.5k, BW=72.2MiB/s (75.7MB/s)(145MiB/2001msec) 00:33:00.606 slat (nsec): min=3899, max=78515, avg=5486.42, stdev=1613.09 00:33:00.606 clat (usec): min=324, max=9251, avg=3440.04, stdev=372.78 00:33:00.606 lat (usec): min=329, max=9327, avg=3445.52, stdev=373.23 00:33:00.606 clat percentiles (usec): 00:33:00.606 | 1.00th=[ 3032], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3228], 00:33:00.606 | 30.00th=[ 3261], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3392], 00:33:00.606 | 70.00th=[ 3458], 80.00th=[ 3556], 90.00th=[ 3884], 95.00th=[ 4113], 00:33:00.606 | 99.00th=[ 4424], 99.50th=[ 4883], 99.90th=[ 7635], 99.95th=[ 7963], 00:33:00.606 | 99.99th=[ 9110] 00:33:00.606 bw ( KiB/s): min=68160, max=76856, per=99.40%, avg=73530.67, stdev=4694.96, samples=3 00:33:00.606 iops : min=17040, max=19214, avg=18382.67, stdev=1173.74, samples=3 00:33:00.606 write: IOPS=18.5k, BW=72.3MiB/s (75.8MB/s)(145MiB/2001msec); 0 zone resets 00:33:00.606 slat (nsec): min=4022, max=58869, avg=5591.29, stdev=1591.60 00:33:00.606 clat (usec): min=253, max=9148, avg=3453.28, stdev=381.88 00:33:00.606 lat (usec): min=258, max=9172, avg=3458.87, stdev=382.32 00:33:00.606 clat percentiles (usec): 00:33:00.606 | 1.00th=[ 3032], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3228], 00:33:00.606 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3425], 00:33:00.606 | 70.00th=[ 3458], 80.00th=[ 3556], 90.00th=[ 3916], 95.00th=[ 4146], 00:33:00.606 | 99.00th=[ 4424], 99.50th=[ 4948], 99.90th=[ 7570], 99.95th=[ 8029], 00:33:00.606 | 99.99th=[ 8979] 00:33:00.606 bw ( KiB/s): min=68352, max=76256, per=99.22%, avg=73440.00, 
stdev=4414.72, samples=3 00:33:00.606 iops : min=17088, max=19064, avg=18360.67, stdev=1104.20, samples=3 00:33:00.606 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:33:00.606 lat (msec) : 2=0.05%, 4=91.84%, 10=8.07% 00:33:00.606 cpu : usr=99.90%, sys=0.00%, ctx=20, majf=0, minf=35 00:33:00.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:00.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:00.606 issued rwts: total=37005,37029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:00.606 00:33:00.606 Run status group 0 (all jobs): 00:33:00.606 READ: bw=72.2MiB/s (75.7MB/s), 72.2MiB/s-72.2MiB/s (75.7MB/s-75.7MB/s), io=145MiB (152MB), run=2001-2001msec 00:33:00.606 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=145MiB (152MB), run=2001-2001msec 00:33:00.863 ----------------------------------------------------- 00:33:00.863 Suppressions used: 00:33:00.863 count bytes template 00:33:00.863 1 32 /usr/src/fio/parse.c 00:33:00.863 ----------------------------------------------------- 00:33:00.863 00:33:00.863 00:52:34 -- nvme/nvme.sh@44 -- # ran_fio=true 00:33:00.864 00:52:34 -- nvme/nvme.sh@46 -- # true 00:33:00.864 00:33:00.864 real 0m4.202s 00:33:00.864 user 0m3.452s 00:33:00.864 sys 0m0.428s 00:33:00.864 00:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:00.864 00:52:34 -- common/autotest_common.sh@10 -- # set +x 00:33:00.864 ************************************ 00:33:00.864 END TEST nvme_fio 00:33:00.864 ************************************ 00:33:00.864 00:33:00.864 real 0m49.178s 00:33:00.864 user 2m8.689s 00:33:00.864 sys 0m9.367s 00:33:00.864 00:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:00.864 ************************************ 00:33:00.864 00:52:34 -- common/autotest_common.sh@10 -- # set +x 00:33:00.864 END TEST nvme 00:33:00.864 ************************************ 00:33:00.864 00:52:34 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:33:00.864 00:52:34 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:33:00.864 00:52:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:00.864 00:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:00.864 00:52:34 -- common/autotest_common.sh@10 -- # set +x 00:33:01.121 ************************************ 00:33:01.121 START TEST nvme_scc 00:33:01.121 ************************************ 00:33:01.121 00:52:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:33:01.121 * Looking for test storage... 
00:33:01.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:33:01.121 00:52:34 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:33:01.121 00:52:34 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:33:01.121 00:52:34 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:33:01.121 00:52:34 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:01.121 00:52:34 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:01.121 00:52:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.121 00:52:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.121 00:52:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.121 00:52:34 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:01.121 00:52:34 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:01.121 00:52:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:01.121 00:52:34 -- paths/export.sh@5 -- # export PATH 00:33:01.121 00:52:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:33:01.121 00:52:34 -- nvme/functions.sh@10 -- # ctrls=() 00:33:01.121 00:52:34 -- nvme/functions.sh@10 -- # declare -A ctrls 00:33:01.121 00:52:34 -- nvme/functions.sh@11 -- # nvmes=() 00:33:01.121 00:52:34 -- nvme/functions.sh@11 -- # declare -A nvmes 00:33:01.121 00:52:34 -- nvme/functions.sh@12 -- # bdfs=() 00:33:01.121 00:52:34 -- nvme/functions.sh@12 -- # declare -A bdfs 00:33:01.121 00:52:34 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:33:01.121 00:52:34 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:33:01.121 00:52:34 -- nvme/functions.sh@14 -- # nvme_name= 00:33:01.121 00:52:34 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:01.121 00:52:34 -- nvme/nvme_scc.sh@12 -- # uname 00:33:01.121 00:52:34 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:33:01.121 00:52:34 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
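The long dump that follows is scan_nvme_ctrls populating one bash associative array per controller: every line of nvme id-ctrl output is split on ':' into a register name and value, then stored via eval. A condensed sketch of that read loop, assuming nvme-cli's plain id-ctrl output format (the real functions.sh additionally trims fields via IFS handling and repeats the scan per namespace with id-ns):

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}   # field names are single tokens; drop the padding
        val=${val# }               # drop the space that follows ':'
        [[ -n "$reg" && -n "$val" ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "oncs=${nvme0[oncs]}"     # 0x15d for the QEMU controller scanned below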
00:33:01.121 00:52:34 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:01.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:01.379 Waiting for block devices as requested 00:33:01.379 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:01.641 00:52:35 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:33:01.641 00:52:35 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:33:01.641 00:52:35 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:33:01.641 00:52:35 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:33:01.641 00:52:35 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:33:01.641 00:52:35 -- scripts/common.sh@15 -- # local i 00:33:01.641 00:52:35 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:33:01.641 00:52:35 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:33:01.641 00:52:35 -- scripts/common.sh@24 -- # return 0 00:33:01.641 00:52:35 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:33:01.641 00:52:35 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:33:01.641 00:52:35 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@18 -- # shift 00:33:01.641 00:52:35 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:33:01.641 00:52:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.641 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.641 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:33:01.641 00:52:35 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 
00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:33:01.642 00:52:35 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:33:01.642 00:52:35 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.642 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.642 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- 
# read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:33:01.643 
00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:33:01.643 
00:52:35 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.643 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.643 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.643 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 
00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:33:01.644 00:52:35 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:33:01.644 00:52:35 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:33:01.644 00:52:35 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:33:01.644 00:52:35 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@18 -- # shift 00:33:01.644 00:52:35 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 
00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.644 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:33:01.644 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.644 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 
00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:33:01.645 00:52:35 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # IFS=: 00:33:01.645 00:52:35 -- nvme/functions.sh@21 -- # read -r reg val 00:33:01.645 00:52:35 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:33:01.645 00:52:35 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:33:01.645 00:52:35 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:33:01.645 00:52:35 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:33:01.645 00:52:35 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:33:01.645 00:52:35 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:33:01.645 00:52:35 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:33:01.645 00:52:35 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:33:01.645 00:52:35 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:33:01.645 00:52:35 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:33:01.645 00:52:35 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:33:01.645 00:52:35 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:33:01.645 00:52:35 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:33:01.645 00:52:35 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:33:01.645 00:52:35 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:33:01.645 00:52:35 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:33:01.645 00:52:35 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:33:01.645 00:52:35 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:33:01.645 00:52:35 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:33:01.645 00:52:35 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:33:01.645 00:52:35 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:33:01.645 00:52:35 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:33:01.645 00:52:35 -- nvme/functions.sh@76 -- # echo 0x15d 00:33:01.645 00:52:35 -- nvme/functions.sh@184 -- # oncs=0x15d 00:33:01.645 00:52:35 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:33:01.645 00:52:35 -- nvme/functions.sh@197 -- # echo nvme0 00:33:01.645 00:52:35 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:33:01.645 00:52:35 -- nvme/functions.sh@206 -- # echo nvme0 00:33:01.645 00:52:35 -- nvme/functions.sh@207 -- # return 0 00:33:01.646 00:52:35 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:33:01.646 00:52:35 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:33:01.646 00:52:35 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:02.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:33:02.213 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:03.148 00:52:36 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:33:03.148 00:52:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:33:03.148 00:52:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:03.148 00:52:36 -- common/autotest_common.sh@10 -- # set +x 00:33:03.407 ************************************ 00:33:03.407 START TEST nvme_simple_copy 00:33:03.407 ************************************ 00:33:03.407 00:52:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:33:03.665 Initializing NVMe Controllers 00:33:03.665 Attaching to 0000:00:10.0 00:33:03.665 Controller supports SCC. Attached to 0000:00:10.0 00:33:03.665 Namespace ID: 1 size: 5GB 00:33:03.665 Initialization complete. 00:33:03.665 00:33:03.665 Controller QEMU NVMe Ctrl (12340 ) 00:33:03.665 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:33:03.665 Namespace Block Size:4096 00:33:03.665 Writing LBAs 0 to 63 with Random Data 00:33:03.665 Copied LBAs from 0 - 63 to the Destination LBA 256 00:33:03.665 LBAs matching Written Data: 64 00:33:03.665 00:33:03.665 real 0m0.309s 00:33:03.665 user 0m0.134s 00:33:03.665 sys 0m0.077s 00:33:03.665 00:52:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:03.665 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:33:03.665 ************************************ 00:33:03.665 END TEST nvme_simple_copy 00:33:03.665 ************************************ 00:33:03.665 00:33:03.665 real 0m2.618s 00:33:03.665 user 0m0.815s 00:33:03.665 sys 0m1.699s 00:33:03.665 00:52:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:03.665 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:33:03.665 ************************************ 00:33:03.665 END TEST nvme_scc 00:33:03.665 ************************************ 00:33:03.665 00:52:37 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:33:03.665 00:52:37 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:33:03.665 00:52:37 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:33:03.665 00:52:37 -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]] 00:33:03.665 00:52:37 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:33:03.665 00:52:37 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:33:03.665 00:52:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:03.665 00:52:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:03.665 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:33:03.665 ************************************ 00:33:03.665 START TEST nvme_rpc 00:33:03.665 ************************************ 00:33:03.665 00:52:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:33:03.923 * Looking for test storage... 
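The controller selection earlier in this trace comes down to a single bit: get_ctrls_with_feature keeps a controller only if ONCS bit 8 (Simple Copy support) is set, and 0x15d & (1 << 8) is nonzero, which is why nvme0 qualified and the simple_copy test could attach to 0000:00:10.0 and report 64 matching LBAs. The same check stands alone as a short sketch, assuming nvme-cli is installed:

    # ONCS bit 8 advertises the Copy command (SCC).
    oncs=$(nvme id-ctrl /dev/nvme0 | awk -F': *' '$1 ~ /^oncs/ {print $2}')
    if (( oncs & 1 << 8 )); then
        echo "/dev/nvme0 advertises Simple Copy (oncs=$oncs)"
    fi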
00:33:03.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:33:03.923 00:52:37 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.923 00:52:37 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:33:03.923 00:52:37 -- common/autotest_common.sh@1510 -- # bdfs=() 00:33:03.923 00:52:37 -- common/autotest_common.sh@1510 -- # local bdfs 00:33:03.923 00:52:37 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:33:03.923 00:52:37 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:33:03.923 00:52:37 -- common/autotest_common.sh@1499 -- # bdfs=() 00:33:03.923 00:52:37 -- common/autotest_common.sh@1499 -- # local bdfs 00:33:03.923 00:52:37 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:03.923 00:52:37 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:33:03.923 00:52:37 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:33:03.924 00:52:37 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:33:03.924 00:52:37 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 00:33:03.924 00:52:37 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:33:03.924 00:52:37 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:33:03.924 00:52:37 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=148950 00:33:03.924 00:52:37 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:33:03.924 00:52:37 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:33:03.924 00:52:37 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 148950 00:33:03.924 00:52:37 -- common/autotest_common.sh@817 -- # '[' -z 148950 ']' 00:33:03.924 00:52:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.924 00:52:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:03.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.924 00:52:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.924 00:52:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:03.924 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:33:03.924 [2024-04-27 00:52:37.395528] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
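The get_first_nvme_bdf helper traced above delegates device discovery to gen_nvme.sh, which emits an SPDK JSON config for every local controller; jq then extracts each controller's PCI address. Reduced to its core, under the repo layout used in this trace:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}   # 0000:00:10.0 in this run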
00:33:03.924 [2024-04-27 00:52:37.395736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148950 ] 00:33:04.180 [2024-04-27 00:52:37.570713] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:04.438 [2024-04-27 00:52:37.809639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.438 [2024-04-27 00:52:37.809651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.043 00:52:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:05.043 00:52:38 -- common/autotest_common.sh@850 -- # return 0 00:33:05.043 00:52:38 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:33:05.306 Nvme0n1 00:33:05.306 00:52:38 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:33:05.306 00:52:38 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:33:05.564 request: 00:33:05.564 { 00:33:05.564 "filename": "non_existing_file", 00:33:05.564 "bdev_name": "Nvme0n1", 00:33:05.564 "method": "bdev_nvme_apply_firmware", 00:33:05.564 "req_id": 1 00:33:05.564 } 00:33:05.564 Got JSON-RPC error response 00:33:05.564 response: 00:33:05.564 { 00:33:05.564 "code": -32603, 00:33:05.564 "message": "open file failed." 00:33:05.564 } 00:33:05.564 00:52:39 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:33:05.564 00:52:39 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:33:05.564 00:52:39 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:05.821 00:52:39 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:33:05.821 00:52:39 -- nvme/nvme_rpc.sh@40 -- # killprocess 148950 00:33:05.822 00:52:39 -- common/autotest_common.sh@936 -- # '[' -z 148950 ']' 00:33:05.822 00:52:39 -- common/autotest_common.sh@940 -- # kill -0 148950 00:33:05.822 00:52:39 -- common/autotest_common.sh@941 -- # uname 00:33:05.822 00:52:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:05.822 00:52:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 148950 00:33:06.079 00:52:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:06.079 00:52:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:06.079 00:52:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 148950' 00:33:06.079 killing process with pid 148950 00:33:06.079 00:52:39 -- common/autotest_common.sh@955 -- # kill 148950 00:33:06.079 00:52:39 -- common/autotest_common.sh@960 -- # wait 148950 00:33:07.978 00:33:07.978 real 0m4.136s 00:33:07.978 user 0m7.892s 00:33:07.978 sys 0m0.656s 00:33:07.978 00:52:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:07.978 00:52:41 -- common/autotest_common.sh@10 -- # set +x 00:33:07.978 ************************************ 00:33:07.978 END TEST nvme_rpc 00:33:07.978 ************************************ 00:33:07.978 00:52:41 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:33:07.978 00:52:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:07.978 00:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:07.978 00:52:41 -- common/autotest_common.sh@10 -- # set +x 00:33:07.978 ************************************ 00:33:07.978 
START TEST nvme_rpc_timeouts 00:33:07.978 ************************************ 00:33:07.978 00:52:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:33:07.978 * Looking for test storage... 00:33:07.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:07.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:33:07.978 00:52:41 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:07.978 00:52:41 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_149035 00:33:07.978 00:52:41 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_149035 00:33:07.978 00:52:41 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=149063 00:33:07.978 00:52:41 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:33:07.978 00:52:41 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 149063 00:33:07.978 00:52:41 -- common/autotest_common.sh@817 -- # '[' -z 149063 ']' 00:33:07.978 00:52:41 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:33:07.978 00:52:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:07.978 00:52:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:07.978 00:52:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:07.978 00:52:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:07.978 00:52:41 -- common/autotest_common.sh@10 -- # set +x 00:33:08.236 [2024-04-27 00:52:41.575291] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:33:08.236 [2024-04-27 00:52:41.576083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149063 ] 00:33:08.236 [2024-04-27 00:52:41.742830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:08.494 [2024-04-27 00:52:41.930845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.494 [2024-04-27 00:52:41.930850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.428 Checking default timeout settings: 00:33:09.428 00:52:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:09.428 00:52:42 -- common/autotest_common.sh@850 -- # return 0 00:33:09.428 00:52:42 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:33:09.428 00:52:42 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:33:09.687 Making settings changes with rpc: 00:33:09.687 00:52:43 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:33:09.687 00:52:43 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:33:09.944 Check default vs. modified settings: 00:33:09.944 00:52:43 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:33:09.944 00:52:43 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_149035 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_149035 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:10.203 Setting action_on_timeout is changed as expected. 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_149035 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_149035 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:10.203 Setting timeout_us is changed as expected. 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_149035 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_149035 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:33:10.203 Setting timeout_admin_us is changed as expected. 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
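The three checks above are one save/modify/save/diff round-trip. A minimal standalone sketch of that loop, reusing the exact rpc.py flags and the grep/awk/sed extraction from the trace (tmp paths shortened here):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    # a changed value is the pass condition, mirroring the echoes above
    [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done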
00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_149035 /tmp/settings_modified_149035 00:33:10.203 00:52:43 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 149063 00:33:10.203 00:52:43 -- common/autotest_common.sh@936 -- # '[' -z 149063 ']' 00:33:10.203 00:52:43 -- common/autotest_common.sh@940 -- # kill -0 149063 00:33:10.203 00:52:43 -- common/autotest_common.sh@941 -- # uname 00:33:10.203 00:52:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:10.203 00:52:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149063 00:33:10.203 killing process with pid 149063 00:33:10.203 00:52:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:10.203 00:52:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:10.203 00:52:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149063' 00:33:10.203 00:52:43 -- common/autotest_common.sh@955 -- # kill 149063 00:33:10.203 00:52:43 -- common/autotest_common.sh@960 -- # wait 149063 00:33:12.730 RPC TIMEOUT SETTING TEST PASSED. 00:33:12.730 00:52:45 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:33:12.730 00:33:12.730 real 0m4.463s 00:33:12.730 user 0m8.682s 00:33:12.730 sys 0m0.667s 00:33:12.730 ************************************ 00:33:12.730 END TEST nvme_rpc_timeouts 00:33:12.730 ************************************ 00:33:12.730 00:52:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:12.731 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:33:12.731 00:52:45 -- spdk/autotest.sh@241 -- # '[' 1 -eq 0 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@245 -- # [[ 0 -eq 1 ]] 00:33:12.731 00:52:45 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@258 -- # timing_exit lib 00:33:12.731 00:52:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:12.731 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:33:12.731 00:52:45 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:33:12.731 00:52:45 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:33:12.731 00:52:45 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:33:12.731 00:52:45 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:12.731 00:52:45 -- spdk/autotest.sh@373 -- # [[ 1 -eq 1 ]] 00:33:12.731 00:52:45 -- spdk/autotest.sh@374 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:12.731 00:52:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:12.731 00:52:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 
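The run of '[' 0 -eq 1 ']' guards above is the autotest dispatcher: each suite sits behind a 0/1 flag, and only the raid5f branch ([[ 1 -eq 1 ]]) fires in this run. Reduced to its shape (names below are illustrative, not the autotest source):

run_test() {                      # sketch of the wrapper, not the real helper
    local name=$1; shift
    echo "START TEST $name"
    "$@"; local rc=$?
    echo "END TEST $name"
    return $rc
}
# a hypothetical flag standing in for the suite's 0/1 switch
[[ "${RUN_RAID5F:-0}" -eq 1 ]] && \
    run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f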
00:33:12.731 00:52:45 -- common/autotest_common.sh@10 -- # set +x 00:33:12.731 ************************************ 00:33:12.731 START TEST blockdev_raid5f 00:33:12.731 ************************************ 00:33:12.731 00:52:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:33:12.731 * Looking for test storage... 00:33:12.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:33:12.731 00:52:46 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:33:12.731 00:52:46 -- bdev/nbd_common.sh@6 -- # set -e 00:33:12.731 00:52:46 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:33:12.731 00:52:46 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:12.731 00:52:46 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:33:12.731 00:52:46 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:33:12.731 00:52:46 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:33:12.731 00:52:46 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:33:12.731 00:52:46 -- bdev/blockdev.sh@20 -- # : 00:33:12.731 00:52:46 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:33:12.731 00:52:46 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:33:12.731 00:52:46 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:33:12.731 00:52:46 -- bdev/blockdev.sh@674 -- # uname -s 00:33:12.731 00:52:46 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:33:12.731 00:52:46 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:33:12.731 00:52:46 -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:33:12.731 00:52:46 -- bdev/blockdev.sh@683 -- # crypto_device= 00:33:12.731 00:52:46 -- bdev/blockdev.sh@684 -- # dek= 00:33:12.731 00:52:46 -- bdev/blockdev.sh@685 -- # env_ctx= 00:33:12.731 00:52:46 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:33:12.731 00:52:46 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:33:12.731 00:52:46 -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:33:12.731 00:52:46 -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:33:12.731 00:52:46 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:33:12.731 00:52:46 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=149215 00:33:12.731 00:52:46 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:33:12.731 00:52:46 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:33:12.731 00:52:46 -- bdev/blockdev.sh@49 -- # waitforlisten 149215 00:33:12.731 00:52:46 -- common/autotest_common.sh@817 -- # '[' -z 149215 ']' 00:33:12.731 00:52:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.731 00:52:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:12.731 00:52:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.731 00:52:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:12.731 00:52:46 -- common/autotest_common.sh@10 -- # set +x 00:33:12.731 [2024-04-27 00:52:46.154675] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:33:12.731 [2024-04-27 00:52:46.155113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149215 ] 00:33:12.989 [2024-04-27 00:52:46.324309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.989 [2024-04-27 00:52:46.524466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.922 00:52:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:13.922 00:52:47 -- common/autotest_common.sh@850 -- # return 0 00:33:13.922 00:52:47 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:33:13.922 00:52:47 -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:33:13.922 00:52:47 -- bdev/blockdev.sh@280 -- # rpc_cmd 00:33:13.922 00:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:13.922 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.922 Malloc0 00:33:13.922 Malloc1 00:33:13.922 Malloc2 00:33:13.922 00:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:13.922 00:52:47 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:33:13.922 00:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:13.922 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.922 00:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:13.922 00:52:47 -- bdev/blockdev.sh@740 -- # cat 00:33:13.922 00:52:47 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:33:13.922 00:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:13.922 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.922 00:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:13.922 00:52:47 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:33:13.922 00:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:13.922 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.922 00:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:13.922 00:52:47 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:33:13.922 00:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:13.922 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.922 00:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:13.922 00:52:47 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:33:13.922 00:52:47 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:33:13.922 00:52:47 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:33:13.922 00:52:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:13.922 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:33:13.922 00:52:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:13.922 00:52:47 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:33:13.922 00:52:47 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2d13ab63-4572-47a4-8dce-f9e778e46355"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2d13ab63-4572-47a4-8dce-f9e778e46355",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2d13ab63-4572-47a4-8dce-f9e778e46355",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "8a5dc900-9a09-4564-8756-ebe8a9badb05",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "68bef8aa-75e9-483b-aca8-2cd256dc39bc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "71610d91-18df-422a-8d1d-53cd003a958a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:33:13.922 00:52:47 -- bdev/blockdev.sh@749 -- # jq -r .name 00:33:14.181 00:52:47 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:33:14.181 00:52:47 -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:33:14.181 00:52:47 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:33:14.181 00:52:47 -- bdev/blockdev.sh@754 -- # killprocess 149215 00:33:14.181 00:52:47 -- common/autotest_common.sh@936 -- # '[' -z 149215 ']' 00:33:14.181 00:52:47 -- common/autotest_common.sh@940 -- # kill -0 149215 00:33:14.181 00:52:47 -- common/autotest_common.sh@941 -- # uname 00:33:14.181 00:52:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:14.181 00:52:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149215 00:33:14.181 00:52:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:14.181 killing process with pid 149215 00:33:14.181 00:52:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:14.181 00:52:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149215' 00:33:14.181 00:52:47 -- common/autotest_common.sh@955 -- # kill 149215 00:33:14.181 00:52:47 -- common/autotest_common.sh@960 -- # wait 149215 00:33:16.711 00:52:49 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:16.711 00:52:49 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:33:16.711 00:52:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:33:16.711 00:52:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:16.711 00:52:49 -- common/autotest_common.sh@10 -- # set +x 00:33:16.711 ************************************ 00:33:16.711 START TEST bdev_hello_world 00:33:16.711 ************************************ 00:33:16.711 00:52:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:33:16.711 [2024-04-27 00:52:49.984019] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 
00:33:16.711 [2024-04-27 00:52:49.984235] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149293 ] 00:33:16.711 [2024-04-27 00:52:50.154463] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.969 [2024-04-27 00:52:50.334879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.536 [2024-04-27 00:52:50.834927] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:33:17.536 [2024-04-27 00:52:50.835238] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:33:17.536 [2024-04-27 00:52:50.835402] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:33:17.536 [2024-04-27 00:52:50.836063] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:33:17.536 [2024-04-27 00:52:50.836332] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:33:17.536 [2024-04-27 00:52:50.836496] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:33:17.536 [2024-04-27 00:52:50.836710] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:33:17.536 00:33:17.536 [2024-04-27 00:52:50.836867] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:33:18.911 ************************************ 00:33:18.911 END TEST bdev_hello_world 00:33:18.911 ************************************ 00:33:18.911 00:33:18.911 real 0m2.312s 00:33:18.911 user 0m1.925s 00:33:18.911 sys 0m0.265s 00:33:18.911 00:52:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:18.911 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:33:18.911 00:52:52 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:33:18.911 00:52:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:18.911 00:52:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:18.911 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:33:18.911 ************************************ 00:33:18.912 START TEST bdev_bounds 00:33:18.912 ************************************ 00:33:18.912 00:52:52 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:33:18.912 00:52:52 -- bdev/blockdev.sh@290 -- # bdevio_pid=149348 00:33:18.912 00:52:52 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:33:18.912 00:52:52 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:18.912 Process bdevio pid: 149348 00:33:18.912 00:52:52 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 149348' 00:33:18.912 00:52:52 -- bdev/blockdev.sh@293 -- # waitforlisten 149348 00:33:18.912 00:52:52 -- common/autotest_common.sh@817 -- # '[' -z 149348 ']' 00:33:18.912 00:52:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:18.912 00:52:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:18.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:18.912 00:52:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
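bdevio is driven like any SPDK app: start it waiting (-w) on a socket, then kick the suite over RPC. By hand, with both commands as logged in this run:

# terminal 1: start bdevio in wait mode against the raid5f config
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
# terminal 2: once /var/tmp/spdk.sock is up, run the whole checklist
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests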
00:33:18.912 00:52:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:18.912 00:52:52 -- common/autotest_common.sh@10 -- # set +x 00:33:18.912 [2024-04-27 00:52:52.381248] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:33:18.912 [2024-04-27 00:52:52.381434] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149348 ] 00:33:19.175 [2024-04-27 00:52:52.564836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:19.440 [2024-04-27 00:52:52.784764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.440 [2024-04-27 00:52:52.784927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.440 [2024-04-27 00:52:52.784937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.007 00:52:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:20.007 00:52:53 -- common/autotest_common.sh@850 -- # return 0 00:33:20.007 00:52:53 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:20.007 I/O targets: 00:33:20.007 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:33:20.007 00:33:20.007 00:33:20.007 CUnit - A unit testing framework for C - Version 2.1-3 00:33:20.007 http://cunit.sourceforge.net/ 00:33:20.007 00:33:20.007 00:33:20.007 Suite: bdevio tests on: raid5f 00:33:20.007 Test: blockdev write read block ...passed 00:33:20.007 Test: blockdev write zeroes read block ...passed 00:33:20.007 Test: blockdev write zeroes read no split ...passed 00:33:20.007 Test: blockdev write zeroes read split ...passed 00:33:20.266 Test: blockdev write zeroes read split partial ...passed 00:33:20.266 Test: blockdev reset ...passed 00:33:20.266 Test: blockdev write read 8 blocks ...passed 00:33:20.266 Test: blockdev write read size > 128k ...passed 00:33:20.266 Test: blockdev write read invalid size ...passed 00:33:20.266 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:20.266 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:20.266 Test: blockdev write read max offset ...passed 00:33:20.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:20.266 Test: blockdev writev readv 8 blocks ...passed 00:33:20.266 Test: blockdev writev readv 30 x 1block ...passed 00:33:20.266 Test: blockdev writev readv block ...passed 00:33:20.266 Test: blockdev writev readv size > 128k ...passed 00:33:20.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:20.266 Test: blockdev comparev and writev ...passed 00:33:20.266 Test: blockdev nvme passthru rw ...passed 00:33:20.266 Test: blockdev nvme passthru vendor specific ...passed 00:33:20.266 Test: blockdev nvme admin passthru ...passed 00:33:20.266 Test: blockdev copy ...passed 00:33:20.266 00:33:20.266 Run Summary: Type Total Ran Passed Failed Inactive 00:33:20.266 suites 1 1 n/a 0 0 00:33:20.266 tests 23 23 23 0 0 00:33:20.266 asserts 130 130 130 0 n/a 00:33:20.266 00:33:20.266 Elapsed time = 0.508 seconds 00:33:20.266 0 00:33:20.266 00:52:53 -- bdev/blockdev.sh@295 -- # killprocess 149348 00:33:20.266 00:52:53 -- common/autotest_common.sh@936 -- # '[' -z 149348 ']' 00:33:20.266 00:52:53 -- common/autotest_common.sh@940 -- # kill -0 149348 00:33:20.266 00:52:53 -- common/autotest_common.sh@941 -- # uname 00:33:20.266 00:52:53 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:20.266 00:52:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149348 00:33:20.266 00:52:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:20.266 killing process with pid 149348 00:33:20.266 00:52:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:20.266 00:52:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149348' 00:33:20.266 00:52:53 -- common/autotest_common.sh@955 -- # kill 149348 00:33:20.266 00:52:53 -- common/autotest_common.sh@960 -- # wait 149348 00:33:21.641 00:52:55 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:33:21.641 00:33:21.641 real 0m2.755s 00:33:21.641 user 0m6.445s 00:33:21.641 sys 0m0.403s 00:33:21.641 00:52:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:21.641 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:33:21.641 ************************************ 00:33:21.641 END TEST bdev_bounds 00:33:21.641 ************************************ 00:33:21.641 00:52:55 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:33:21.641 00:52:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:33:21.641 00:52:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:21.641 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:33:21.641 ************************************ 00:33:21.641 START TEST bdev_nbd 00:33:21.641 ************************************ 00:33:21.641 00:52:55 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:33:21.641 00:52:55 -- bdev/blockdev.sh@300 -- # uname -s 00:33:21.641 00:52:55 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:33:21.641 00:52:55 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:21.641 00:52:55 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:33:21.641 00:52:55 -- bdev/blockdev.sh@304 -- # bdev_all=('raid5f') 00:33:21.641 00:52:55 -- bdev/blockdev.sh@304 -- # local bdev_all 00:33:21.641 00:52:55 -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:33:21.641 00:52:55 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:33:21.641 00:52:55 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:21.641 00:52:55 -- bdev/blockdev.sh@311 -- # local nbd_all 00:33:21.641 00:52:55 -- bdev/blockdev.sh@312 -- # bdev_num=1 00:33:21.641 00:52:55 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:33:21.641 00:52:55 -- bdev/blockdev.sh@314 -- # local nbd_list 00:33:21.641 00:52:55 -- bdev/blockdev.sh@315 -- # bdev_list=('raid5f') 00:33:21.641 00:52:55 -- bdev/blockdev.sh@315 -- # local bdev_list 00:33:21.641 00:52:55 -- bdev/blockdev.sh@318 -- # nbd_pid=149421 00:33:21.641 00:52:55 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:21.641 00:52:55 -- bdev/blockdev.sh@320 -- # waitforlisten 149421 /var/tmp/spdk-nbd.sock 00:33:21.641 00:52:55 -- common/autotest_common.sh@817 -- # '[' -z 149421 ']' 00:33:21.641 00:52:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:21.641 00:52:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:21.641 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:21.642 00:52:55 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:33:21.642 00:52:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:21.642 00:52:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:21.642 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:33:21.900 [2024-04-27 00:52:55.233944] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:33:21.900 [2024-04-27 00:52:55.234166] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.900 [2024-04-27 00:52:55.407460] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.159 [2024-04-27 00:52:55.607307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.724 00:52:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:22.724 00:52:56 -- common/autotest_common.sh@850 -- # return 0 00:33:22.724 00:52:56 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@24 -- # local i 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:22.724 00:52:56 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:33:22.982 00:52:56 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:22.983 00:52:56 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:22.983 00:52:56 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:33:22.983 00:52:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:22.983 00:52:56 -- common/autotest_common.sh@855 -- # local i 00:33:22.983 00:52:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:22.983 00:52:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:22.983 00:52:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:22.983 00:52:56 -- common/autotest_common.sh@859 -- # break 00:33:22.983 00:52:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:22.983 00:52:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:22.983 00:52:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:22.983 1+0 records in 00:33:22.983 1+0 records out 00:33:22.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363488 s, 11.3 MB/s 00:33:22.983 00:52:56 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:22.983 00:52:56 -- common/autotest_common.sh@872 -- # size=4096 00:33:22.983 00:52:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:22.983 00:52:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:22.983 00:52:56 -- common/autotest_common.sh@875 -- # return 0 00:33:22.983 00:52:56 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:22.983 00:52:56 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:33:22.983 00:52:56 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:23.241 { 00:33:23.241 "nbd_device": "/dev/nbd0", 00:33:23.241 "bdev_name": "raid5f" 00:33:23.241 } 00:33:23.241 ]' 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:23.241 { 00:33:23.241 "nbd_device": "/dev/nbd0", 00:33:23.241 "bdev_name": "raid5f" 00:33:23.241 } 00:33:23.241 ]' 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@51 -- # local i 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:23.241 00:52:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@41 -- # break 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@45 -- # return 0 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.499 00:52:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@65 -- # true 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@65 -- # count=0 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@122 -- # count=0 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@127 -- # return 0 00:33:23.757 00:52:57 -- bdev/blockdev.sh@323 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:33:23.757 00:52:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:24.015 00:52:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:24.015 00:52:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:24.015 00:52:57 -- bdev/nbd_common.sh@12 -- # local i 00:33:24.015 00:52:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:24.015 00:52:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:24.015 00:52:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:33:24.273 /dev/nbd0 00:33:24.273 00:52:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:24.273 00:52:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:24.273 00:52:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:33:24.273 00:52:57 -- common/autotest_common.sh@855 -- # local i 00:33:24.273 00:52:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:33:24.273 00:52:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:33:24.273 00:52:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:33:24.273 00:52:57 -- common/autotest_common.sh@859 -- # break 00:33:24.273 00:52:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:33:24.273 00:52:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:33:24.273 00:52:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.273 1+0 records in 00:33:24.273 1+0 records out 00:33:24.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310584 s, 13.2 MB/s 00:33:24.273 00:52:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.273 00:52:57 -- common/autotest_common.sh@872 -- # size=4096 00:33:24.273 00:52:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.273 00:52:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:33:24.273 00:52:57 -- common/autotest_common.sh@875 -- # return 0 00:33:24.273 00:52:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.273 00:52:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:24.273 00:52:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:24.273 00:52:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:24.273 00:52:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:24.531 00:52:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:24.531 { 00:33:24.531 "nbd_device": "/dev/nbd0", 00:33:24.531 "bdev_name": "raid5f" 00:33:24.531 } 00:33:24.531 ]' 00:33:24.531 00:52:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:24.531 { 00:33:24.531 "nbd_device": "/dev/nbd0", 00:33:24.531 "bdev_name": "raid5f" 00:33:24.531 
} 00:33:24.531 ]' 00:33:24.531 00:52:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:24.531 00:52:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:33:24.531 00:52:57 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:33:24.531 00:52:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@65 -- # count=1 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@66 -- # echo 1 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@95 -- # count=1 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:24.531 256+0 records in 00:33:24.531 256+0 records out 00:33:24.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106026 s, 98.9 MB/s 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:24.531 256+0 records in 00:33:24.531 256+0 records out 00:33:24.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0417322 s, 25.1 MB/s 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@51 -- # local i 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:24.531 00:52:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
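The write/verify pass above is ordinary block I/O against the exported /dev/nbd0; stripped of the harness it is just (paths, sizes and flags as logged):

test=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of=$test bs=4096 count=256            # 1 MiB of random data
dd if=$test of=/dev/nbd0 bs=4096 count=256 oflag=direct  # push it through NBD
cmp -b -n 1M $test /dev/nbd0                             # read back and compare
rm $test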
00:33:25.097 00:52:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@41 -- # break 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@45 -- # return 0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@65 -- # true 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@65 -- # count=0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@104 -- # count=0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@109 -- # return 0 00:33:25.097 00:52:58 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:25.097 00:52:58 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:33:25.669 malloc_lvol_verify 00:33:25.669 00:52:58 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:25.669 1a20932e-3121-4d6f-aabd-f17aaf6692db 00:33:25.669 00:52:59 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:25.933 fc985cf4-72bc-49c3-920b-db7944342e25 00:33:25.933 00:52:59 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:26.192 /dev/nbd0 00:33:26.192 00:52:59 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:26.192 mke2fs 1.46.5 (30-Dec-2021) 00:33:26.192 00:33:26.192 Filesystem too small for a journal 00:33:26.192 Discarding device blocks: 0/1024 done 00:33:26.192 Creating filesystem with 1024 4k blocks and 1024 inodes 00:33:26.192 00:33:26.192 Allocating group tables: 0/1 done 00:33:26.192 Writing inode tables: 0/1 done 00:33:26.192 Writing superblocks and filesystem accounting information: 0/1 done 00:33:26.192 00:33:26.192 00:52:59 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:26.192 00:52:59 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:26.192 00:52:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:26.192 00:52:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:26.192 00:52:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:26.192 00:52:59 -- bdev/nbd_common.sh@51 -- # local i 00:33:26.192 00:52:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
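The lvol leg just shown stacks a malloc bdev, an lvstore and a 4 MiB lvol, exports the lvol over NBD, and proves it usable with mkfs. The same stack by hand (every RPC as logged, -s pointing at the nbd app's socket):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
$rpc nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # too small for a journal, as noted above
$rpc nbd_stop_disk /dev/nbd0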
00:33:26.192 00:52:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@41 -- # break 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:26.451 00:52:59 -- bdev/nbd_common.sh@147 -- # return 0 00:33:26.451 00:52:59 -- bdev/blockdev.sh@326 -- # killprocess 149421 00:33:26.451 00:52:59 -- common/autotest_common.sh@936 -- # '[' -z 149421 ']' 00:33:26.451 00:52:59 -- common/autotest_common.sh@940 -- # kill -0 149421 00:33:26.451 00:52:59 -- common/autotest_common.sh@941 -- # uname 00:33:26.451 00:52:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:26.451 00:52:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 149421 00:33:26.451 00:52:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:26.451 00:52:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:26.451 00:52:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 149421' 00:33:26.451 killing process with pid 149421 00:33:26.451 00:52:59 -- common/autotest_common.sh@955 -- # kill 149421 00:33:26.451 00:52:59 -- common/autotest_common.sh@960 -- # wait 149421 00:33:27.829 ************************************ 00:33:27.829 END TEST bdev_nbd 00:33:27.829 ************************************ 00:33:27.829 00:53:01 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:33:27.829 00:33:27.829 real 0m6.132s 00:33:27.829 user 0m8.697s 00:33:27.829 sys 0m1.323s 00:33:27.829 00:53:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:27.829 00:53:01 -- common/autotest_common.sh@10 -- # set +x 00:33:27.829 00:53:01 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:33:27.829 00:53:01 -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:33:27.829 00:53:01 -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:33:27.829 00:53:01 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:33:27.829 00:53:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:27.829 00:53:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:27.829 00:53:01 -- common/autotest_common.sh@10 -- # set +x 00:33:27.829 ************************************ 00:33:27.829 START TEST bdev_fio 00:33:27.829 ************************************ 00:33:27.829 00:53:01 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:33:27.829 00:53:01 -- bdev/blockdev.sh@331 -- # local env_context 00:33:27.829 00:53:01 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:33:27.829 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:33:27.829 00:53:01 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:27.829 00:53:01 -- bdev/blockdev.sh@339 -- # echo '' 00:33:27.829 00:53:01 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:33:27.829 00:53:01 -- bdev/blockdev.sh@339 -- # env_context= 00:33:27.829 00:53:01 -- bdev/blockdev.sh@340 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:33:27.829 00:53:01 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:27.829 00:53:01 -- common/autotest_common.sh@1267 -- # local workload=verify 00:33:27.829 00:53:01 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:33:27.829 00:53:01 -- common/autotest_common.sh@1269 -- # local env_context= 00:33:27.829 00:53:01 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:33:27.829 00:53:01 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:27.829 00:53:01 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:33:27.829 00:53:01 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:33:27.829 00:53:01 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:27.829 00:53:01 -- common/autotest_common.sh@1287 -- # cat 00:33:27.829 00:53:01 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:33:27.829 00:53:01 -- common/autotest_common.sh@1300 -- # cat 00:33:27.829 00:53:01 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:33:27.830 00:53:01 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:33:28.088 00:53:01 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:28.088 00:53:01 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:33:28.088 00:53:01 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:33:28.088 00:53:01 -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:33:28.088 00:53:01 -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:33:28.088 00:53:01 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:33:28.088 00:53:01 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:28.088 00:53:01 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:33:28.089 00:53:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:28.089 00:53:01 -- common/autotest_common.sh@10 -- # set +x 00:33:28.089 ************************************ 00:33:28.089 START TEST bdev_fio_rw_verify 00:33:28.089 ************************************ 00:33:28.089 00:53:01 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:28.089 00:53:01 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:28.089 00:53:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:33:28.089 00:53:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 
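The fio leg generates bdev.fio (workload verify, [job_raid5f], filename=raid5f) and runs fio through the spdk_bdev ioengine with ASan preloaded. Reconstructed as one invocation, every flag below taken from the trace (the generated job file itself is not dumped in the log, so its body is not shown here):

LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output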
00:33:28.089 00:53:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:33:28.089 00:53:01 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:28.089 00:53:01 -- common/autotest_common.sh@1327 -- # shift 00:33:28.089 00:53:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:33:28.089 00:53:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.089 00:53:01 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:28.089 00:53:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:33:28.089 00:53:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:33:28.089 00:53:01 -- common/autotest_common.sh@1331 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:33:28.089 00:53:01 -- common/autotest_common.sh@1332 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:33:28.089 00:53:01 -- common/autotest_common.sh@1333 -- # break 00:33:28.089 00:53:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:28.089 00:53:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:33:28.347 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.347 fio-3.35 00:33:28.347 Starting 1 thread 00:33:40.544 00:33:40.544 job_raid5f: (groupid=0, jobs=1): err= 0: pid=149668: Sat Apr 27 00:53:12 2024 00:33:40.544 read: IOPS=10.4k, BW=40.6MiB/s (42.5MB/s)(406MiB/10001msec) 00:33:40.544 slat (usec): min=18, max=404, avg=22.65, stdev= 6.20 00:33:40.544 clat (usec): min=11, max=764, avg=149.88, stdev=57.98 00:33:40.544 lat (usec): min=33, max=786, avg=172.52, stdev=59.31 00:33:40.544 clat percentiles (usec): 00:33:40.544 | 50.000th=[ 147], 99.000th=[ 289], 99.900th=[ 334], 99.990th=[ 578], 00:33:40.544 | 99.999th=[ 750] 00:33:40.544 write: IOPS=10.9k, BW=42.6MiB/s (44.7MB/s)(421MiB/9870msec); 0 zone resets 00:33:40.544 slat (usec): min=9, max=593, avg=20.48, stdev= 6.95 00:33:40.544 clat (usec): min=66, max=1230, avg=353.55, stdev=63.21 00:33:40.544 lat (usec): min=84, max=1316, avg=374.03, stdev=65.37 00:33:40.544 clat percentiles (usec): 00:33:40.544 | 50.000th=[ 351], 99.000th=[ 515], 99.900th=[ 807], 99.990th=[ 1090], 00:33:40.544 | 99.999th=[ 1188] 00:33:40.544 bw ( KiB/s): min=38152, max=47112, per=98.80%, avg=43132.63, stdev=2156.35, samples=19 00:33:40.544 iops : min= 9538, max=11778, avg=10783.16, stdev=539.09, samples=19 00:33:40.544 lat (usec) : 20=0.01%, 50=0.01%, 100=11.51%, 250=36.78%, 500=50.90% 00:33:40.544 lat (usec) : 750=0.75%, 1000=0.05% 00:33:40.544 lat (msec) : 2=0.01% 00:33:40.544 cpu : usr=98.95%, sys=0.88%, ctx=152, majf=0, minf=7397 00:33:40.544 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.544 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.544 issued rwts: total=103863,107718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.544 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:40.544 00:33:40.544 Run status group 0 (all jobs): 00:33:40.544 READ: bw=40.6MiB/s (42.5MB/s), 
40.6MiB/s-40.6MiB/s (42.5MB/s-42.5MB/s), io=406MiB (425MB), run=10001-10001msec 00:33:40.544 WRITE: bw=42.6MiB/s (44.7MB/s), 42.6MiB/s-42.6MiB/s (44.7MB/s-44.7MB/s), io=421MiB (441MB), run=9870-9870msec 00:33:40.544 ----------------------------------------------------- 00:33:40.544 Suppressions used: 00:33:40.544 count bytes template 00:33:40.544 1 7 /usr/src/fio/parse.c 00:33:40.544 613 58848 /usr/src/fio/iolog.c 00:33:40.544 1 904 libcrypto.so 00:33:40.544 ----------------------------------------------------- 00:33:40.544 00:33:40.544 00:33:40.544 real 0m12.437s 00:33:40.544 user 0m13.010s 00:33:40.544 sys 0m0.716s 00:33:40.544 00:53:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:40.544 ************************************ 00:33:40.544 00:53:13 -- common/autotest_common.sh@10 -- # set +x 00:33:40.544 END TEST bdev_fio_rw_verify 00:33:40.544 ************************************ 00:33:40.544 00:53:13 -- bdev/blockdev.sh@350 -- # rm -f 00:33:40.544 00:53:13 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:40.544 00:53:13 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:33:40.544 00:53:13 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:40.544 00:53:13 -- common/autotest_common.sh@1267 -- # local workload=trim 00:33:40.544 00:53:13 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:33:40.544 00:53:13 -- common/autotest_common.sh@1269 -- # local env_context= 00:33:40.544 00:53:13 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:33:40.544 00:53:13 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:33:40.544 00:53:13 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:33:40.544 00:53:13 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:33:40.544 00:53:13 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:40.544 00:53:13 -- common/autotest_common.sh@1287 -- # cat 00:33:40.544 00:53:13 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:33:40.544 00:53:13 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:33:40.544 00:53:13 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:33:40.544 00:53:13 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2d13ab63-4572-47a4-8dce-f9e778e46355"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2d13ab63-4572-47a4-8dce-f9e778e46355",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2d13ab63-4572-47a4-8dce-f9e778e46355",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "8a5dc900-9a09-4564-8756-ebe8a9badb05",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc1",' ' "uuid": "68bef8aa-75e9-483b-aca8-2cd256dc39bc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "71610d91-18df-422a-8d1d-53cd003a958a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:33:40.544 00:53:13 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:40.544 00:53:14 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:33:40.544 00:53:14 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:33:40.544 /home/vagrant/spdk_repo/spdk 00:33:40.544 00:53:14 -- bdev/blockdev.sh@362 -- # popd 00:33:40.544 00:53:14 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:33:40.544 00:53:14 -- bdev/blockdev.sh@364 -- # return 0 00:33:40.544 00:33:40.544 real 0m12.638s 00:33:40.544 user 0m13.120s 00:33:40.544 sys 0m0.804s 00:33:40.544 00:53:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:40.544 00:53:14 -- common/autotest_common.sh@10 -- # set +x 00:33:40.544 ************************************ 00:33:40.544 END TEST bdev_fio 00:33:40.544 ************************************ 00:33:40.544 00:53:14 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:40.544 00:53:14 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:40.544 00:53:14 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:33:40.545 00:53:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:40.545 00:53:14 -- common/autotest_common.sh@10 -- # set +x 00:33:40.545 ************************************ 00:33:40.545 START TEST bdev_verify 00:33:40.545 ************************************ 00:33:40.545 00:53:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:33:40.803 [2024-04-27 00:53:14.156055] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:33:40.803 [2024-04-27 00:53:14.156252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149843 ] 00:33:40.803 [2024-04-27 00:53:14.325811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:41.061 [2024-04-27 00:53:14.485773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:41.061 [2024-04-27 00:53:14.485783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.628 Running I/O for 5 seconds... 
00:33:46.896 00:33:46.896 Latency(us) 00:33:46.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.896 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:46.896 Verification LBA range: start 0x0 length 0x2000 00:33:46.896 raid5f : 5.01 7782.92 30.40 0.00 0.00 24728.24 182.46 20375.74 00:33:46.896 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:33:46.896 Verification LBA range: start 0x2000 length 0x2000 00:33:46.896 raid5f : 5.01 7819.86 30.55 0.00 0.00 24630.35 87.51 19660.80 00:33:46.896 =================================================================================================================== 00:33:46.896 Total : 15602.78 60.95 0.00 0.00 24679.15 87.51 20375.74 00:33:47.831 00:33:47.831 real 0m7.137s 00:33:47.831 user 0m13.098s 00:33:47.831 sys 0m0.269s 00:33:47.831 00:53:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:47.831 00:53:21 -- common/autotest_common.sh@10 -- # set +x 00:33:47.831 ************************************ 00:33:47.831 END TEST bdev_verify 00:33:47.831 ************************************ 00:33:47.831 00:53:21 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:47.831 00:53:21 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:33:47.831 00:53:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:47.831 00:53:21 -- common/autotest_common.sh@10 -- # set +x 00:33:47.831 ************************************ 00:33:47.831 START TEST bdev_verify_big_io 00:33:47.831 ************************************ 00:33:47.831 00:53:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:33:47.831 [2024-04-27 00:53:21.384269] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:33:47.831 [2024-04-27 00:53:21.384472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149950 ] 00:33:48.089 [2024-04-27 00:53:21.552827] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:48.347 [2024-04-27 00:53:21.746216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.347 [2024-04-27 00:53:21.746221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.911 Running I/O for 5 seconds... 
00:33:54.172 00:33:54.172 Latency(us) 00:33:54.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.172 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:33:54.172 Verification LBA range: start 0x0 length 0x200 00:33:54.172 raid5f : 5.36 426.25 26.64 0.00 0.00 7446999.84 202.01 388926.37 00:33:54.172 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:33:54.172 Verification LBA range: start 0x200 length 0x200 00:33:54.172 raid5f : 5.26 421.85 26.37 0.00 0.00 7452896.56 197.35 396552.38 00:33:54.172 =================================================================================================================== 00:33:54.172 Total : 848.09 53.01 0.00 0.00 7449906.97 197.35 396552.38 00:33:55.545 00:33:55.545 real 0m7.522s 00:33:55.545 user 0m13.866s 00:33:55.545 sys 0m0.265s 00:33:55.545 00:53:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:55.545 00:53:28 -- common/autotest_common.sh@10 -- # set +x 00:33:55.545 ************************************ 00:33:55.545 END TEST bdev_verify_big_io 00:33:55.545 ************************************ 00:33:55.545 00:53:28 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:55.545 00:53:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:55.545 00:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:55.545 00:53:28 -- common/autotest_common.sh@10 -- # set +x 00:33:55.545 ************************************ 00:33:55.545 START TEST bdev_write_zeroes 00:33:55.545 ************************************ 00:33:55.545 00:53:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:55.545 [2024-04-27 00:53:28.990523] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:33:55.545 [2024-04-27 00:53:28.990927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150062 ] 00:33:55.803 [2024-04-27 00:53:29.159376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.803 [2024-04-27 00:53:29.338805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:56.370 Running I/O for 1 seconds... 
00:33:57.305 00:33:57.305 Latency(us) 00:33:57.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:57.305 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:33:57.305 raid5f : 1.01 25382.56 99.15 0.00 0.00 5025.96 1697.98 6970.65 00:33:57.305 =================================================================================================================== 00:33:57.305 Total : 25382.56 99.15 0.00 0.00 5025.96 1697.98 6970.65 00:33:58.679 00:33:58.679 real 0m3.077s 00:33:58.679 user 0m2.697s 00:33:58.679 sys 0m0.265s 00:33:58.680 00:53:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:58.680 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:33:58.680 ************************************ 00:33:58.680 END TEST bdev_write_zeroes 00:33:58.680 ************************************ 00:33:58.680 00:53:32 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:58.680 00:53:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:58.680 00:53:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:58.680 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:33:58.680 ************************************ 00:33:58.680 START TEST bdev_json_nonenclosed 00:33:58.680 ************************************ 00:33:58.680 00:53:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:58.680 [2024-04-27 00:53:32.161327] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:33:58.680 [2024-04-27 00:53:32.161658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150123 ] 00:33:58.938 [2024-04-27 00:53:32.329164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.938 [2024-04-27 00:53:32.491173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:58.938 [2024-04-27 00:53:32.491325] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:33:58.938 [2024-04-27 00:53:32.491363] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:58.938 [2024-04-27 00:53:32.491387] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:33:59.507 00:33:59.507 real 0m0.761s 00:33:59.507 user 0m0.513s 00:33:59.507 sys 0m0.147s 00:33:59.507 00:53:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:59.507 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.507 ************************************ 00:33:59.507 END TEST bdev_json_nonenclosed 00:33:59.507 ************************************ 00:33:59.507 00:53:32 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:59.507 00:53:32 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:33:59.507 00:53:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:59.507 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:33:59.507 ************************************ 00:33:59.507 START TEST bdev_json_nonarray 00:33:59.507 ************************************ 00:33:59.507 00:53:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:59.507 [2024-04-27 00:53:32.997185] Starting SPDK v24.05-pre git sha1 6651b13f7 / DPDK 23.11.0 initialization... 00:33:59.508 [2024-04-27 00:53:32.997363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150166 ] 00:33:59.768 [2024-04-27 00:53:33.152356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.768 [2024-04-27 00:53:33.334899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.768 [2024-04-27 00:53:33.335055] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:33:59.768 [2024-04-27 00:53:33.335094] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:33:59.768 [2024-04-27 00:53:33.335121] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:00.334 00:34:00.334 real 0m0.717s 00:34:00.334 user 0m0.501s 00:34:00.334 sys 0m0.116s 00:34:00.334 00:53:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:00.334 00:53:33 -- common/autotest_common.sh@10 -- # set +x 00:34:00.334 ************************************ 00:34:00.334 END TEST bdev_json_nonarray 00:34:00.334 ************************************ 00:34:00.334 00:53:33 -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:34:00.334 00:53:33 -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:34:00.334 00:53:33 -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:34:00.334 00:53:33 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:34:00.334 00:53:33 -- bdev/blockdev.sh@811 -- # cleanup 00:34:00.334 00:53:33 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:00.334 00:53:33 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:00.334 00:53:33 -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:34:00.334 00:53:33 -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:34:00.334 00:53:33 -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:34:00.334 00:53:33 -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:34:00.334 00:34:00.334 real 0m47.729s 00:34:00.334 user 1m5.181s 00:34:00.334 sys 0m4.738s 00:34:00.334 00:53:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:00.334 00:53:33 -- common/autotest_common.sh@10 -- # set +x 00:34:00.334 ************************************ 00:34:00.334 END TEST blockdev_raid5f 00:34:00.334 ************************************ 00:34:00.334 00:53:33 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:34:00.334 00:53:33 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:34:00.334 00:53:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:00.334 00:53:33 -- common/autotest_common.sh@10 -- # set +x 00:34:00.334 00:53:33 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:34:00.334 00:53:33 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:34:00.334 00:53:33 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:34:00.334 00:53:33 -- common/autotest_common.sh@10 -- # set +x 00:34:01.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:01.716 Waiting for block devices as requested 00:34:01.975 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:02.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:34:02.492 Cleaning 00:34:02.492 Removing: /var/run/dpdk/spdk0/config 00:34:02.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:02.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:02.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:02.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:02.492 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:02.492 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:02.492 Removing: /dev/shm/spdk_tgt_trace.pid110308 00:34:02.492 Removing: /var/run/dpdk/spdk0 00:34:02.492 Removing: /var/run/dpdk/spdk_pid110040 00:34:02.492 Removing: /var/run/dpdk/spdk_pid110308 00:34:02.492 Removing: /var/run/dpdk/spdk_pid110574 00:34:02.492 Removing: /var/run/dpdk/spdk_pid110697 00:34:02.492 Removing: 
/var/run/dpdk/spdk_pid110756 00:34:02.492 Removing: /var/run/dpdk/spdk_pid110905 00:34:02.492 Removing: /var/run/dpdk/spdk_pid110928 00:34:02.492 Removing: /var/run/dpdk/spdk_pid111094 00:34:02.492 Removing: /var/run/dpdk/spdk_pid111363 00:34:02.492 Removing: /var/run/dpdk/spdk_pid111547 00:34:02.492 Removing: /var/run/dpdk/spdk_pid111661 00:34:02.492 Removing: /var/run/dpdk/spdk_pid111767 00:34:02.492 Removing: /var/run/dpdk/spdk_pid111893 00:34:02.492 Removing: /var/run/dpdk/spdk_pid112004 00:34:02.492 Removing: /var/run/dpdk/spdk_pid112061 00:34:02.492 Removing: /var/run/dpdk/spdk_pid112117 00:34:02.492 Removing: /var/run/dpdk/spdk_pid112195 00:34:02.492 Removing: /var/run/dpdk/spdk_pid112326 00:34:02.492 Removing: /var/run/dpdk/spdk_pid112862 00:34:02.492 Removing: /var/run/dpdk/spdk_pid112946 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113022 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113050 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113185 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113206 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113346 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113367 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113440 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113463 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113539 00:34:02.492 Removing: /var/run/dpdk/spdk_pid113561 00:34:02.493 Removing: /var/run/dpdk/spdk_pid113768 00:34:02.493 Removing: /var/run/dpdk/spdk_pid113820 00:34:02.493 Removing: /var/run/dpdk/spdk_pid113864 00:34:02.493 Removing: /var/run/dpdk/spdk_pid113956 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114053 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114105 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114218 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114272 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114327 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114392 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114452 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114514 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114581 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114636 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114700 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114760 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114820 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114881 00:34:02.493 Removing: /var/run/dpdk/spdk_pid114944 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115000 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115062 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115122 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115186 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115244 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115314 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115369 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115433 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115530 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115676 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115855 00:34:02.493 Removing: /var/run/dpdk/spdk_pid115949 00:34:02.493 Removing: /var/run/dpdk/spdk_pid116010 00:34:02.493 Removing: /var/run/dpdk/spdk_pid117263 00:34:02.493 Removing: /var/run/dpdk/spdk_pid117491 00:34:02.493 Removing: /var/run/dpdk/spdk_pid117702 00:34:02.493 Removing: /var/run/dpdk/spdk_pid117831 00:34:02.493 Removing: /var/run/dpdk/spdk_pid117979 00:34:02.493 Removing: /var/run/dpdk/spdk_pid118057 00:34:02.493 Removing: /var/run/dpdk/spdk_pid118101 00:34:02.493 Removing: /var/run/dpdk/spdk_pid118138 00:34:02.493 Removing: /var/run/dpdk/spdk_pid118635 00:34:02.493 Removing: /var/run/dpdk/spdk_pid118734 00:34:02.493 Removing: 
/var/run/dpdk/spdk_pid118844 00:34:02.493 Removing: /var/run/dpdk/spdk_pid118911 00:34:02.493 Removing: /var/run/dpdk/spdk_pid120169 00:34:02.493 Removing: /var/run/dpdk/spdk_pid121096 00:34:02.493 Removing: /var/run/dpdk/spdk_pid122017 00:34:02.493 Removing: /var/run/dpdk/spdk_pid123171 00:34:02.752 Removing: /var/run/dpdk/spdk_pid124273 00:34:02.752 Removing: /var/run/dpdk/spdk_pid125386 00:34:02.752 Removing: /var/run/dpdk/spdk_pid126910 00:34:02.752 Removing: /var/run/dpdk/spdk_pid128230 00:34:02.752 Removing: /var/run/dpdk/spdk_pid129472 00:34:02.752 Removing: /var/run/dpdk/spdk_pid130156 00:34:02.752 Removing: /var/run/dpdk/spdk_pid130709 00:34:02.752 Removing: /var/run/dpdk/spdk_pid131340 00:34:02.752 Removing: /var/run/dpdk/spdk_pid131840 00:34:02.752 Removing: /var/run/dpdk/spdk_pid132406 00:34:02.752 Removing: /var/run/dpdk/spdk_pid132966 00:34:02.752 Removing: /var/run/dpdk/spdk_pid133630 00:34:02.752 Removing: /var/run/dpdk/spdk_pid134160 00:34:02.752 Removing: /var/run/dpdk/spdk_pid135565 00:34:02.752 Removing: /var/run/dpdk/spdk_pid136184 00:34:02.752 Removing: /var/run/dpdk/spdk_pid136734 00:34:02.752 Removing: /var/run/dpdk/spdk_pid138276 00:34:02.752 Removing: /var/run/dpdk/spdk_pid138963 00:34:02.752 Removing: /var/run/dpdk/spdk_pid139584 00:34:02.752 Removing: /var/run/dpdk/spdk_pid140364 00:34:02.752 Removing: /var/run/dpdk/spdk_pid140418 00:34:02.752 Removing: /var/run/dpdk/spdk_pid140471 00:34:02.752 Removing: /var/run/dpdk/spdk_pid140529 00:34:02.752 Removing: /var/run/dpdk/spdk_pid140674 00:34:02.752 Removing: /var/run/dpdk/spdk_pid140821 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141055 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141357 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141383 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141441 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141471 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141499 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141531 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141559 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141591 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141624 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141652 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141680 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141712 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141743 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141774 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141805 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141833 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141865 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141893 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141927 00:34:02.752 Removing: /var/run/dpdk/spdk_pid141951 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142014 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142035 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142081 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142169 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142229 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142256 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142305 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142338 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142360 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142423 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142453 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142501 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142533 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142558 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142586 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142608 00:34:02.752 Removing: 
/var/run/dpdk/spdk_pid142632 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142661 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142685 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142741 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142799 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142826 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142881 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142908 00:34:02.752 Removing: /var/run/dpdk/spdk_pid142932 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143004 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143031 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143078 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143110 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143130 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143154 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143182 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143207 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143231 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143253 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143364 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143464 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143632 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143667 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143731 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143799 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143837 00:34:02.752 Removing: /var/run/dpdk/spdk_pid143872 00:34:03.011 Removing: /var/run/dpdk/spdk_pid143902 00:34:03.011 Removing: /var/run/dpdk/spdk_pid143954 00:34:03.011 Removing: /var/run/dpdk/spdk_pid143991 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144084 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144155 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144225 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144526 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144674 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144731 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144833 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144922 00:34:03.011 Removing: /var/run/dpdk/spdk_pid144976 00:34:03.011 Removing: /var/run/dpdk/spdk_pid145249 00:34:03.011 Removing: /var/run/dpdk/spdk_pid145362 00:34:03.011 Removing: /var/run/dpdk/spdk_pid145468 00:34:03.011 Removing: /var/run/dpdk/spdk_pid145529 00:34:03.011 Removing: /var/run/dpdk/spdk_pid145571 00:34:03.011 Removing: /var/run/dpdk/spdk_pid145660 00:34:03.011 Removing: /var/run/dpdk/spdk_pid146089 00:34:03.011 Removing: /var/run/dpdk/spdk_pid146144 00:34:03.011 Removing: /var/run/dpdk/spdk_pid146473 00:34:03.011 Removing: /var/run/dpdk/spdk_pid146580 00:34:03.011 Removing: /var/run/dpdk/spdk_pid146694 00:34:03.011 Removing: /var/run/dpdk/spdk_pid146761 00:34:03.011 Removing: /var/run/dpdk/spdk_pid146797 00:34:03.011 Removing: /var/run/dpdk/spdk_pid146839 00:34:03.011 Removing: /var/run/dpdk/spdk_pid148268 00:34:03.011 Removing: /var/run/dpdk/spdk_pid148408 00:34:03.011 Removing: /var/run/dpdk/spdk_pid148421 00:34:03.011 Removing: /var/run/dpdk/spdk_pid148444 00:34:03.011 Removing: /var/run/dpdk/spdk_pid148950 00:34:03.011 Removing: /var/run/dpdk/spdk_pid149063 00:34:03.011 Removing: /var/run/dpdk/spdk_pid149215 00:34:03.011 Removing: /var/run/dpdk/spdk_pid149293 00:34:03.011 Removing: /var/run/dpdk/spdk_pid149348 00:34:03.011 Removing: /var/run/dpdk/spdk_pid149649 00:34:03.011 Removing: /var/run/dpdk/spdk_pid149843 00:34:03.011 Removing: /var/run/dpdk/spdk_pid149950 00:34:03.011 Removing: /var/run/dpdk/spdk_pid150062 00:34:03.011 Removing: /var/run/dpdk/spdk_pid150123 00:34:03.011 Removing: /var/run/dpdk/spdk_pid150166 00:34:03.011 Clean 00:34:03.011 00:53:36 
-- common/autotest_common.sh@1437 -- # return 0 00:34:03.011 00:53:36 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:34:03.011 00:53:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:03.011 00:53:36 -- common/autotest_common.sh@10 -- # set +x 00:34:03.269 00:53:36 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:34:03.270 00:53:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:03.270 00:53:36 -- common/autotest_common.sh@10 -- # set +x 00:34:03.270 00:53:36 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:03.270 00:53:36 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:03.270 00:53:36 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:03.270 00:53:36 -- spdk/autotest.sh@389 -- # hash lcov 00:34:03.270 00:53:36 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:03.270 00:53:36 -- spdk/autotest.sh@391 -- # hostname 00:34:03.270 00:53:36 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:03.528 geninfo: WARNING: invalid characters removed from testname! 00:34:50.199 00:54:15 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:50.199 00:54:21 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:51.133 00:54:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:54.442 00:54:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:56.973 00:54:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:00.259 00:54:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 
--rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:03.545 00:54:36 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:03.545 00:54:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:03.545 00:54:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:35:03.545 00:54:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.545 00:54:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.545 00:54:36 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:03.545 00:54:36 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:03.545 00:54:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:03.546 00:54:36 -- paths/export.sh@5 -- $ export PATH 00:35:03.546 00:54:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:35:03.546 00:54:36 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:35:03.546 00:54:36 -- common/autobuild_common.sh@435 -- $ date +%s 00:35:03.546 00:54:36 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714179276.XXXXXX 00:35:03.546 00:54:36 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714179276.Baq0Fw 00:35:03.546 00:54:36 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:35:03.546 00:54:36 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:35:03.546 00:54:36 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:35:03.546 00:54:36 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:35:03.546 00:54:36 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:35:03.546 00:54:36 -- common/autobuild_common.sh@451 -- $ get_config_params 00:35:03.546 00:54:36 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:35:03.546 00:54:36 -- common/autotest_common.sh@10 -- $ set +x 00:35:03.546 00:54:36 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror 
--with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:35:03.546 00:54:36 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:35:03.546 00:54:36 -- pm/common@17 -- $ local monitor 00:35:03.546 00:54:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:03.546 00:54:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=151673 00:35:03.546 00:54:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:03.546 00:54:36 -- pm/common@21 -- $ date +%s 00:35:03.546 00:54:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=151675 00:35:03.546 00:54:36 -- pm/common@26 -- $ sleep 1 00:35:03.546 00:54:36 -- pm/common@21 -- $ date +%s 00:35:03.546 00:54:36 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714179276 00:35:03.546 00:54:36 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714179276 00:35:03.546 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714179276_collect-vmstat.pm.log 00:35:03.546 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714179276_collect-cpu-load.pm.log 00:35:04.482 00:54:37 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:35:04.482 00:54:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:35:04.482 00:54:37 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:35:04.482 00:54:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:35:04.482 00:54:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:35:04.482 00:54:37 -- spdk/autopackage.sh@19 -- $ timing_finish 00:35:04.482 00:54:37 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:04.482 00:54:37 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:35:04.482 00:54:37 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:04.482 00:54:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:35:04.482 00:54:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:35:04.482 00:54:37 -- pm/common@30 -- $ signal_monitor_resources TERM 00:35:04.482 00:54:37 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:35:04.482 00:54:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:04.482 00:54:37 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:35:04.482 00:54:37 -- pm/common@45 -- $ pid=151681 00:35:04.482 00:54:37 -- pm/common@52 -- $ sudo kill -TERM 151681 00:35:04.482 00:54:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:35:04.482 00:54:37 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:35:04.482 00:54:37 -- pm/common@45 -- $ pid=151680 00:35:04.482 00:54:37 -- pm/common@52 -- $ sudo kill -TERM 151680 00:35:04.482 + [[ -n 2102 ]] 00:35:04.482 + sudo kill 2102 00:35:04.491 [Pipeline] } 00:35:04.510 [Pipeline] // timeout 00:35:04.515 [Pipeline] } 00:35:04.531 [Pipeline] // stage 00:35:04.536 [Pipeline] } 00:35:04.552 [Pipeline] // catchError 00:35:04.561 [Pipeline] stage 00:35:04.563 [Pipeline] { (Stop VM) 00:35:04.577 
[Pipeline] sh 00:35:04.911 + vagrant halt 00:35:08.196 ==> default: Halting domain... 00:35:18.178 [Pipeline] sh 00:35:18.457 + vagrant destroy -f 00:35:21.742 ==> default: Removing domain... 00:35:21.757 [Pipeline] sh 00:35:22.042 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest_2/output 00:35:22.053 [Pipeline] } 00:35:22.074 [Pipeline] // stage 00:35:22.080 [Pipeline] } 00:35:22.100 [Pipeline] // dir 00:35:22.105 [Pipeline] } 00:35:22.125 [Pipeline] // wrap 00:35:22.132 [Pipeline] } 00:35:22.150 [Pipeline] // catchError 00:35:22.160 [Pipeline] stage 00:35:22.163 [Pipeline] { (Epilogue) 00:35:22.180 [Pipeline] sh 00:35:22.461 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:40.556 [Pipeline] catchError 00:35:40.558 [Pipeline] { 00:35:40.571 [Pipeline] sh 00:35:40.849 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:41.106 Artifacts sizes are good 00:35:41.115 [Pipeline] } 00:35:41.132 [Pipeline] // catchError 00:35:41.144 [Pipeline] archiveArtifacts 00:35:41.150 Archiving artifacts 00:35:41.467 [Pipeline] cleanWs 00:35:41.476 [WS-CLEANUP] Deleting project workspace... 00:35:41.476 [WS-CLEANUP] Deferred wipeout is used... 00:35:41.481 [WS-CLEANUP] done 00:35:41.483 [Pipeline] } 00:35:41.500 [Pipeline] // stage 00:35:41.506 [Pipeline] } 00:35:41.521 [Pipeline] // node 00:35:41.526 [Pipeline] End of Pipeline 00:35:41.560 Finished: SUCCESS